
Mergers and Acquisitions
Strategic Games

A (short) Introduction to Game Theory

(based on M.J. Osborne, An Introduction to Game Theory, Oxford University Press, 2004)
Pedagogic approach
More seriously:
I assume nothing is known in advance.
Any question is welcome (if possible, in English).
The goal is to enter into the field, not to cover as much material as possible (I don't care about reaching the end of the announced program, but I care a lot about an in-depth understanding).
The mathematical level of the lecture should not be too challenging (basic equation solving, some mathematical optimization).
We will play many games during the lecture.
You MUST work each week to prepare the lecture (read the slides in advance, prepare questions, review concept definitions, ...).
I rely on you to correct my numerous mistakes.
This is a theoretical lecture!


Outline
1 Introduction
1.1 What is game theory?
1.2 The theory of rational choice
1.3 Coming attractions: interacting decision-makers
2 Nash Equilibrium Theory (perfect information)
2.1 Strategic games
2.2 Example: the Prisoners' Dilemma
2.3 Example: Bach or Stravinsky?
2.4 Example: Matching Pennies
2.5 Example: the Stag Hunt
2.6 Nash equilibrium
2.7 Examples of Nash equilibrium
2.8 Best response functions
Outline
2.9 Dominated actions
2.10 Equilibrium in a single population: symmetric games and symmetric equilibria
3 Nash Equilibrium: Illustrations
3.5 Auctions
4 Mixed Strategy Equilibrium (probabilistic behavior)
4.1 Introduction
4.2 Strategic games in which players may randomize
4.3 Mixed strategy Nash equilibrium
4.4 Dominated actions
4.5 Pure equilibria when randomization is allowed
4.7 Equilibrium in a single population
Outline
4.9 The formation of players' beliefs
4.10 Extension: finding all mixed strategy Nash equilibria
4.11 Extension: games in which each player has a continuum of
actions
4.12 Appendix: Representing preferences by expected payoffs
9 Bayesian Games (imperfect information)
9.1 Motivational examples
9.2 General definitions
9.3 Two examples concerning information
9.6 Illustration: auctions


Outline
5 Extensive Games (Perfect Information): Theory
5.1 Extensive games with perfect information
5.2 Strategies and outcomes
5.3 Nash equilibrium
5.4 Subgame perfect equilibrium
5.5 Finding subgame perfect equilibria of finite horizon games:
backward induction
10 Extensive Games (Imperfect Information)
10.1 Extensive games with imperfect information
10.2 Strategies
10.3 Nash equilibrium
10.4 Beliefs and sequential equilibrium
10.5 Signaling games
10.8 Illustration: strategic information transmission

1 Introduction
1.1 What is game theory?
Game theory aims to help understand situations in which
decision-makers interact.
The main fields of applications are:
Economic analysis
Social analysis
Politics
Biology
Typical applications:
Competing firms
Bidders in auctions
Main tool: model development. This involves a trade-off between:
Realistic assumptions
Simplicity

1.1 What is game theory?
An outline of the history of game theory
First major development in the 1920s
Emile Borel
John von Neumann
Decisive publication: Theory of Games and Economic Behavior,
von Neumann and Morgenstern (1944)
Early 1950s: John Nash
Nash equilibrium
Game-theoretic study of bargaining
1994 Nobel Prize in Economic Sciences
Harsanyi (1920-2000) Bayesian games (Harsanyi doctrine)
Nash (1928-) Nash equilibrium
Selten (1930-) Bounded rationality, extensive games
1.1 What is game theory?
Modeling process
Step 1: selecting aspects of a given situation (that appear to be
relevant) and incorporating them into a model. This step is mostly
an art
Step 2: model analysis (using logic and mathematics)
Step 3: studying the model's implications to determine whether our ideas make sense. This may point towards a revision of the model's assumptions in order to better capture stylized facts.
1.2 The theory of rational choice
Rational choice:
The decision-maker chooses the best action according to her preferences, among
all the actions available to her
No qualitative restriction is placed on preferences
Rationality means consistency of her decisions when faced with different sets of
available actions.

The theory is based on two components: Actions and
Preferences

1.2.1 Actions
Set A consisting of all actions that, under some circumstances, are available to the
decision-maker
In any given situation, the decision-maker knows the subset of available choices,
and takes it as given (the subset is not influenced by the decision-maker
preferences)
1.2 The theory of rational choice
1.2.2 Preferences and payoff functions
We assume that the decision-maker, when presented with any pair of
actions, knows which of the pair she prefers
We assume further that these preferences are consistent (if a > b and
b > c, then a > c).

Preferences representation: preferences can be represented by a
payoff function:
the payoff function associates a number with each action in such a way
that actions with higher numbers are preferred.
More precisely:
u(a) > u(b) if and only if the decision-maker prefers a to b

(Economists often speak about utility function)
1.2 The theory of rational choice
Exercise 5.3

Person 1 cares about both her own income and person 2's income. Precisely, the value she attaches to each unit of her own income is the same as the value she attaches to any two units of person 2's income. For example, she is indifferent between a situation in which her income is 1 and person 2's is 0, and one in which her income is 0 and person 2's is 2. How do her preferences order the outcomes (1,4), (2,1) and (3,0), where the first component in each case is her income and the second component is person 2's income? Give a payoff function consistent with these preferences.
1.2 The theory of rational choice
Note that, as a decision-maker's preferences convey only ordinal information, the payoff function also conveys only ordinal information.
E.g.: if u(a)=0, u(b)=1 and u(c)=100, it does not mean that the decision-maker likes c a lot more than b! A payoff function contains no such information.

Note that, as a consequence, a decision-maker's preferences can be represented by many different payoff functions.
If u represents a decision-maker's preferences and v is another payoff function for which
v(a) > v(b) if and only if u(a) > u(b)
then v also represents the decision-maker's preferences.

More succinctly: if u represents a decision-maker's preferences, then any increasing function of u also represents these preferences.
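To make the ordinal-representation point concrete, here is a minimal Python sketch (not part of the original slides; the function v is an invented increasing transformation, while u and w echo the numbers used above and in Exercise 6.1). It checks whether two payoff functions rank every pair of actions the same way.

# Sketch: two payoff functions represent the same ordinal preferences
# iff they order (and tie) every pair of actions identically.
from itertools import combinations

def same_preferences(u, v, actions):
    """True if u and v induce the same ordering over the given actions."""
    for a, b in combinations(actions, 2):
        if (u[a] > u[b]) != (v[a] > v[b]) or (u[a] == u[b]) != (v[a] == v[b]):
            return False
    return True

actions = ["a", "b", "c"]
u = {"a": 0, "b": 1, "c": 100}          # payoffs from the example above
v = {"a": -5, "b": 0, "c": 2}           # an increasing transformation of u
w = {"a": 0, "b": 0, "c": 8}            # ties a and b: a different ordering
print(same_preferences(u, v, actions))  # True
print(same_preferences(u, w, actions))  # False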
1.2 The theory of rational choice
Exercise 6.1

A decision-maker's preferences over the set A = {a,b,c} are represented by the payoff function u for which u(a)=0, u(b)=1 and u(c)=4. Are they also represented by the function v for which v(a)=-1, v(b)=0, and v(c)=2? How about the function w for which w(a)=w(b)=0 and w(c)=8?
1.2 The theory of rational choice
1.2.3 The theory of rational choice

The theory of rational choice is that the action chosen by a decision-maker is at least as good, according to her preferences, as every other available action.

Note that not every collection of choices for different sets of available actions is consistent with the theory.
E.g.: we observe that a decision-maker chooses a whenever she faces the set {a,b}, but sometimes chooses b when facing {a,b,c}. This is inconsistent:
- always choosing a when facing {a,b} means that the decision-maker prefers a to b;
- hence, when facing {a,b,c}, she must choose a or c.
(Independence of irrelevant alternatives)
1.3 Coming attractions
Up to now, the decision-maker cares only about her own choice. In the real world, a decision-maker often does not control all the variables that affect her.

Game theory studies situations in which some of the variables that affect the decision-maker are the actions of other decision-makers.
2 Nash Equilibrium:
Theory
2.1 Strategic games
Terminology:
we refer to decision-makers as players
each player has a set of possible actions
the action profile is the list of all players actions
each player has preferences about the action profiles

Definition 13.1 (Strategic game with ordinal preferences)
A strategic game with ordinal preferences consists of
a set of players
for each player, a set of actions
for each player, preferences over the set of action profiles
2.1 Strategic games
Note that:
This allows us to model a very wide range of situations:
players = firms, actions = prices, preferences = profits
players = animals, actions = fighting for a prey, preferences = winning or losing
It is frequently convenient to specify the players' preferences by giving payoff functions that represent them. Keep however in mind that a strategic game with ordinal preferences is defined by the players' preferences, not by the payoffs that represent these preferences.
Time is absent from the model: each player chooses her action once and for all, and the players choose their actions simultaneously (no player is informed of the action chosen by any other player).
2.2 Example: the Prisoners' Dilemma

Example 14.1
Two suspects in a major crime are held in separate cells. There is enough evidence to convict each of them of a minor offense, but not enough evidence to convict either of them of the major crime unless one of them acts as an informer against the other (finks). If they both stay quiet, each will be convicted of the minor offense and spend one year in prison. If one and only one of them finks, she will be freed and used as a witness against the other, who will spend four years in prison. If they both fink, each will spend three years in prison.

Model this situation as a strategic game.
2.2 Example: the Prisoners' Dilemma
Solution
Players: the two suspects
Actions: each player's set of actions is {Quiet, Fink}
Preferences: Suspect 1's ordering of the action profiles (from best to worst):
(Fink, Quiet)    free
(Quiet, Quiet)   one year in prison
(Fink, Fink)     three years in prison
(Quiet, Fink)    four years in prison
(and vice versa for player 2)

We can adopt a payoff function for each player:
u_1(Fink, Quiet) > u_1(Quiet, Quiet) > u_1(Fink, Fink) > u_1(Quiet, Fink)
E.g.: u_1(Fink, Quiet) = 3, u_1(Quiet, Quiet) = 2, u_1(Fink, Fink) = 1, u_1(Quiet, Fink) = 0
2.2 Example: the Prisoners' Dilemma
Graphically, the situation is the following (the numbers are the players' payoffs):

              Suspect 2
              Quiet   Fink
  Quiet       2,2     0,3
  Fink        3,0     1,1
(Suspect 1 chooses the row.)

The Prisoners' Dilemma models a situation in which there are gains from cooperation (each player prefers that both players choose Quiet than that they both choose Fink) but each player has an incentive to free ride whatever the other player does.
2.2 Example: the Prisoners' Dilemma
2.2.1 Working on a joint project
You are working with a friend on a joint project. Each of you
can either work hard or goof off. If your friend works hard, then
you prefer to goof off (the outcome of the project would be
better if you worked hard too, but the increment in its value to
you is not worth the extra effort). You prefer the outcome of
your both working hard to the outcome of your both goofing off
(in which case nothing gets accomplished), and the worst
outcome for you is that you work hard and your friend goofs off
(you hate to be exploited).

Model this situation as a strategic game.
2.2 Example: the Prisoners' Dilemma
2.2.2 Duopoly

In a simple model of a duopoly, two firms produce the same
good, for which each firm charges either a low price or a high
price. Each firm wants to achieve the highest possible profit. If
both firms choose High, then each earns a profit of $1000. If
one firm chooses High and the other chooses Low, then the firm
choosing High obtains no customers and makes a loss of $200,
whereas the firm choosing Low earns a profit of $1200 (its unit
profit is low, but its volume is high). If both firms choose Low,
then each earns a profit of $600. Each firm cares only about its
profit.

Model this situation as a strategic game.
2.2 Example: the Prisoners' Dilemma
Exercise 17.1
Determine whether each of the following games differs from the Prisoners' Dilemma only in the names of the players' actions:

       X     Y               X     Y
  X   3,3   1,5         X   2,1   0,5
  Y   5,1   0,0         Y   3,-2  1,-1
An application to M&As: the Grossman & Hart free riding argument.
2.3 Example: Bach or Stravinsky?
(Battle of the Sexes or BoS)
Situation:
Players agree that it is better to cooperate
Players disagree about the best outcome

Example 18.2
Two people wish to go out together. Two concerts are available: one of music by Bach, and one of music by Stravinsky. One person prefers Bach and the other prefers Stravinsky. If they go to different concerts, each of them is equally unhappy listening to the music of either composer.

Model this situation as a strategic game.
An application to merging banks: two banks are merging. Both
agree that they will be better off using the same information
system technology but they disagree on which one to choose.
Google versus Microsoft/Yahoo
2.3 Example: Bach or Stravinsky?
(Battle of the Sexes or BoS)
Solution

              Player 2
              Bach       Stravinsky
  Bach        2,1        0,0
  Stravinsky  0,0        1,2
(Player 1 chooses the row.)
2.4 Example: Matching Pennies
Situation:
A purely conflictual situation

Example 19.1

Two people choose, simultaneously, whether to show the head or the tail of a coin. If they show the same side, person 2 pays person 1 a dollar. If they show different sides, person 1 pays person 2 a dollar. Each person cares only about the amount of money she receives (and is a profit maximizer!).

Model this situation as a strategic game.

An application to choices of appearance for new products by an established producer and a new entrant in a market of fixed size: the established producer prefers the newcomer's product to look different from its own (to avoid confusion) while the newcomer prefers that the products look alike.
iPhone iOS versus Android
2.4 Example: Matching Pennies
Solution

              Player 2
              Head    Tail
  Head        1,-1    -1,1
  Tail        -1,1    1,-1
(Player 1 chooses the row.)
2.5 Example: the Stag Hunt
Situation:
Cooperation is better for both but not credible.

Example 20.2
Each of a group of hunters has two options: she may remain
attentive to the pursuit of a stag, or she may catch a hare. If all
hunters pursue the stag, they catch it and share it equally. If any
hunter devotes her energy to catching a hare, the stag escapes,
and the hare belongs to the defecting hunter alone. Each hunter
prefers a share of the stag to a hare.

Model this situation as a strategic game.
2.5 Example: the Stag Hunt
Solution

              Player 2
              Stag    Hare
  Stag        2,2     0,1
  Hare        1,0     1,1
(Player 1 chooses the row.)
2.6 Nash equilibrium
Question:
What actions will be chosen by players in a strategic game?
(assuming that each player chooses the best available action)

Answer:
To make a choice, each player must form a belief about the other players' actions.

Assumption:
We assume in strategic games that players' beliefs are derived from their past experience playing the game:
they know how their opponents will behave;
note however that they do not know which specific opponents they are facing, and so they cannot condition their behavior on facing a specific opponent.
Beliefs are about typical opponents, not any specific set of opponents.
2.6 Nash equilibrium
In this setup, a Nash equilibrium is an action profile a* with the property that no player i can do better by choosing an action different from a*_i, given that every other player j adheres to a*_j.

Note:
A Nash equilibrium corresponds to a steady state: if, whenever the game is played, the action profile is the same Nash equilibrium a*, then no player has a reason to choose any action different from her component of a*.
Players' beliefs about each other's actions are (assumed to be) correct. This implies, in particular, that two players' beliefs about a third player's action are the same (expectations are coordinated: Harsanyi Doctrine).
Two key ingredients: rational choices and correct beliefs.
2.6 Nash equilibrium
Notation and formal definition:
Let a_i be the action of player i.
Let a be an action profile: a = (a_1, a_2, ..., a_n).
Let a'_i be any action of player i (different from a_i).
Let (a'_i, a_-i) be the action profile in which every player j except i chooses her action a_j as specified by a, whereas player i chooses a'_i (the subscript -i stands for "except i").
(a'_i, a_-i) is the action profile in which all the players other than i adhere to a while i deviates to a'_i.
Note that if a'_i = a_i, then (a'_i, a_-i) = (a_i, a_-i) = a.
2.6 Nash equilibrium
Definition 23.1 (Nash equilibrium of a strategic game with ordinal preferences)

The action profile a* in a strategic game with ordinal preferences is a Nash equilibrium if, for every player i and every action a_i of player i, a* is at least as good according to player i's preferences as the action profile (a_i, a*_-i) in which player i chooses a_i while every other player j chooses a*_j.

Equivalently:

u_i(a*) ≥ u_i(a_i, a*_-i) for every action a_i of player i
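As a concrete companion to Definition 23.1 (a sketch, not part of the slides), the following Python snippet enumerates the pure-strategy Nash equilibria of any finite two-player game given as a payoff table, and applies it to the Prisoners' Dilemma.

# Sketch: enumerate pure-strategy Nash equilibria of a finite two-player game.
# payoffs[(row, col)] = (u1, u2); an action pair is a Nash equilibrium if no
# player can gain by deviating unilaterally (Definition 23.1).

def pure_nash_equilibria(rows, cols, payoffs):
    equilibria = []
    for r in rows:
        for c in cols:
            u1, u2 = payoffs[(r, c)]
            best_for_1 = all(payoffs[(r2, c)][0] <= u1 for r2 in rows)
            best_for_2 = all(payoffs[(r, c2)][1] <= u2 for c2 in cols)
            if best_for_1 and best_for_2:
                equilibria.append((r, c))
    return equilibria

# Prisoners' Dilemma (payoffs as in the table above)
pd = {("Quiet", "Quiet"): (2, 2), ("Quiet", "Fink"): (0, 3),
      ("Fink", "Quiet"): (3, 0), ("Fink", "Fink"): (1, 1)}
print(pure_nash_equilibria(["Quiet", "Fink"], ["Quiet", "Fink"], pd))
# [('Fink', 'Fink')]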
2.6 Nash equilibrium
Note:
This definition implies neither that a strategic game necessarily has a Nash equilibrium, nor that it has at most one.
This definition is designed to model a steady state among experienced players. An alternative approach (called rationalizability) is:
to assume that players know each other's preferences
to consider what each player can deduce about the other players' actions from their rationality and their knowledge of each other's rationality
Nash equilibrium has been studied experimentally.
The keys to designing a suitable experiment are:
to ensure that players are experienced at playing the game
to ensure that players do not repeatedly face the same opponents (as each game must be played in isolation)
The key to correctly interpreting results is to remember that Nash equilibrium is about equilibrium: the outcome must have converged (and the theory says nothing about the time necessary for convergence to occur).
2.7 Examples of Nash
equilibrium
2.7.1 Prisoners' Dilemma

              Suspect 2
              Quiet   Fink
  Quiet       2,2     0,3
  Fink        3,0     1,1
(Suspect 1 chooses the row.)
2.7 Examples of Nash
equilibrium
Detailed explanation
(Fink, Fink) is a Nash equilibrium because:
given that player 2 chooses Fink, player 1 is better off choosing
Fink than Quiet
given that player 1 chooses Fink, player 2 is better off choosing
Fink than Quiet
No other action profile is a Nash equilibrium. Eg, (Quiet, Quiet) is
not a Nash equilibrium because:
if player 2 chooses Quiet, player 1 is better off choosing Fink
(moreover), if player 1 chooses Quiet, player 2 is also better off
choosing Fink
The incentive to free ride eliminates the possibility that
the mutually desirable outcome (Quiet, Quiet) occurs.
2.7 Examples of Nash
equilibrium
Note that:
in the present case, the Nash equilibrium action is the best action
for each player:
if the other player chooses her equilibrium action (Fink)
but also if the other player chooses her other action (Quiet)
In this sense, this equilibrium is highly robust. But, this is not a
requirement of the Nash equilibrium. Only the first condition
must be met.

2.7 Examples of Nash
equilibrium
Exercise 27.1
Each of two players has two possible actions, Quiet and Fink;
each action pair results in the players receiving amounts of
money equal to the numbers corresponding to that action pair in
the following figure:
              Player 2
              Quiet   Fink
  Quiet       2,2     0,3
  Fink        3,0     1,1
(Player 1 chooses the row.)
2.7 Examples of Nash
equilibrium
Players are not selfish: the preferences of each player i are represented by the payoff function m_i(a) + α m_j(a), where m_i(a) is the amount of money received by player i, j is the other player, and α is a given non-negative number. Player 1's payoff to the action pair (Quiet, Quiet) is, for example, 2 + 2α.

1. Formulate the strategic game that models this situation in the case α = 1. Is this game the Prisoners' Dilemma?
2. Find the range of values of α for which the resulting game is the Prisoners' Dilemma. For values of α for which the game is not the Prisoners' Dilemma, find the Nash equilibria.
2.7 Examples of Nash
equilibrium
2.7.2 BoS

              Player 2
              Bach       Stravinsky
  Bach        2,1        0,0
  Stravinsky  0,0        1,2
(Player 1 chooses the row.)
Nash equilibria are (B,B) and (S,S). Why?
Note that this means that BoS has two steady states!
2.7 Examples of Nash
equilibrium
2.7.3 Matching Pennies
              Player 2
              Head    Tail
  Head        1,-1    -1,1
  Tail        -1,1    1,-1
(Player 1 chooses the row.)
There is no Nash equilibrium. Why?
2.7 Examples of Nash
equilibrium
2.7.4 The Stag Hunt
              Player 2
              Stag    Hare
  Stag        2,2     0,1
  Hare        1,0     1,1
(Player 1 chooses the row.)

The Nash equilibria are (S,S) and (H,H). Why?
Note that, despite the fact that (S,S) is better for both players than (H,H), this has no bearing on the equilibrium status of (H,H).
2.7 Examples of Nash
equilibrium
Exercise 30.1 (extension to n players)
Consider the variant of the n-hunter Stag Hunt in which only m hunters, with 2 ≤ m ≤ n, need to pursue the stag in order to catch it (continue to assume that there is a single stag). Assume that a captured stag is shared only by the hunters who catch it. Under each of the following assumptions on the hunters' preferences, find the Nash equilibria of the strategic game that models the situation.
a. As before, each hunter prefers the fraction 1/m of the stag to a hare;
b. Each hunter prefers a fraction 1/k of the stag to a hare, but prefers a hare to any smaller fraction of the stag, where k is an integer with m ≤ k ≤ n.
2.7 Examples of Nash
equilibrium
Note
In games with many Nash equilibria, the theory isolates more than one steady state but says nothing about which one is more likely to appear.
In some games, however, some of these equilibria seem more likely to attract the players' attention than others. These equilibria are called focal.
Example: (B,B) seems here more likely than (S,S):

              Player 2
              Bach       Stravinsky
  Bach        2,2        0,0
  Stravinsky  0,0        1,1
(Player 1 chooses the row.)
2.7 Examples of Nash
equilibrium
2.7.8 Strict and nonstrict equilibria
Definition 23.1 requires only that the outcome of a deviation (by a player) be no better for the deviant than the equilibrium outcome.
An equilibrium is strict if each player's equilibrium action is better than all her other actions, given the other players' actions:

u_i(a*) > u_i(a_i, a*_-i) for every action a_i ≠ a*_i of player i

(Note the strict inequality, contrasting with Definition 23.1.)
2.8 Best Response Functions
2.8.1 Definition
In more complicated games, analyzing each action profile one by one quickly becomes intractable.
Let us denote the set of player i's best actions, when the list of the other players' actions is a_-i, by B_i(a_-i). More precisely:

B_i(a_-i) = { a_i in A_i : u_i(a_i, a_-i) ≥ u_i(a'_i, a_-i) for all a'_i in A_i }

Any action in B_i(a_-i) is at least as good for player i as every other action of player i when the other players' actions are given by a_-i.
2.8 Best Response Functions
2.8.2 Using best response functions to define Nash equilibrium
Proposition 36.1: The action profile a* is a Nash equilibrium of a strategic game with ordinal preferences if and only if every player's action is a best response to the other players' actions:

a*_i is in B_i(a*_-i) for every player i

If each player i has a single best response to each list a_-i, so that B_i(a_-i) = {b_i(a_-i)}, then this is equivalent to:

a*_i = b_i(a*_-i) for every player i

The Nash equilibrium is then characterized by a set of n equations in the n unknowns a*_i:

a*_1 = b_1(a*_2, ..., a*_n)
...
a*_n = b_n(a*_1, ..., a*_{n-1})
2.8 Best Response Functions
2.8.3 Using the best response functions to find Nash equilibria
Procedure:
1. find the best response function of each player
2. find the action profiles that satisfy proposition 36.1
Exercise 37.1.b
Find the Nash equilibria of the game in Figure 38.1 and represent the solution graphically.

       L     C     R
  T   2,2   1,3   0,1
  M   3,1   0,0   0,0
  B   1,0   0,0   0,0
(Figure 38.1; player 1 chooses the row.)
2.8 Best Response Functions
Solution (an asterisk marks a payoff that is a best response):

       L       C       R
  T   2,2     1*,3*   0*,1
  M   3*,1*   0,0     0*,0
  B   1,0*    0,0*    0*,0*

The Nash equilibria are the cells in which both payoffs are starred: (T,C), (M,L) and (B,R).
(The slide also shows the players' best response functions graphically.)
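The starred table can be reproduced mechanically. The sketch below (not from the slides) computes each player's best-response sets for the game of Figure 38.1 and intersects them, as Proposition 36.1 prescribes.

# Sketch: best-response sets for the game of Figure 38.1, then their intersection.
rows, cols = ["T", "M", "B"], ["L", "C", "R"]
payoffs = {("T","L"): (2,2), ("T","C"): (1,3), ("T","R"): (0,1),
           ("M","L"): (3,1), ("M","C"): (0,0), ("M","R"): (0,0),
           ("B","L"): (1,0), ("B","C"): (0,0), ("B","R"): (0,0)}

def best_rows(c):  # player 1's best responses to column c
    m = max(payoffs[(r, c)][0] for r in rows)
    return {r for r in rows if payoffs[(r, c)][0] == m}

def best_cols(r):  # player 2's best responses to row r
    m = max(payoffs[(r, c)][1] for c in cols)
    return {c for c in cols if payoffs[(r, c)][1] == m}

nash = [(r, c) for r in rows for c in cols
        if r in best_rows(c) and c in best_cols(r)]
print(nash)   # [('T', 'C'), ('M', 'L'), ('B', 'R')]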
2.8 Best Response Functions
Example 39.1
Two individuals are involved in a synergistic relationship. If both individuals devote more effort to the relationship, they are both better off. For any given effort of individual j, the return to individual i's effort first increases, then decreases. Specifically, an effort level is a nonnegative number, and individual i's preferences (for i = 1, 2) are represented by the payoff function a_i(c + a_j - a_i), where a_i is i's effort level, a_j is the other individual's effort level, and c > 0 is a constant.
Questions:
Model the situation as a strategic game
Find the players' best response functions
Find the Nash equilibrium
Represent the situation graphically
2.8 Best Response Functions
Strategic game:
Players: the two individuals
Actions: each player's set of actions is the set of effort levels (nonnegative numbers)
Preferences: player i's preferences are represented by the payoff function a_i(c + a_j - a_i), for i = 1, 2

Note that each player has infinitely many actions, so the game cannot be represented by a payoff matrix, as previously.
2.8 Best Response Functions
Best response function:
Intuitive construction
Given a_j, individual i's payoff is a quadratic function of a_i that is zero when a_i = 0 and when a_i = c + a_j. As this quadratic function is symmetric about its maximum, player i's best response to a_j is:

b_i(a_j) = (1/2)(c + a_j)

(Figure: the payoff, plotted against a_i, is an inverted parabola with zeros at a_i = 0 and a_i = c + a_j and its peak at the midpoint.)
2.8 Best Response Functions
Mathematical construction

u_i(a_i, a_j) = a_i(c + a_j - a_i) = c a_i + a_j a_i - a_i^2

FOC: c + a_j - 2 a_i = 0

so a*_i = b_i(a_j) = (1/2)(c + a_j)
2.8 Best Response Functions
Nash equilibrium:
To find the Nash equilibrium, following Proposition 36.1, we have to solve the following system of equations:

a_1 = (1/2)(c + a_2)
a_2 = (1/2)(c + a_1)

By substitution, we get:

a_1 = (1/2)(c + (1/2)(c + a_1))
(3/4) a_1 = (3/4) c
so a_1 = c (and likewise a_2 = c).

The unique Nash equilibrium is (c, c).
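As a quick check of this computation, here is a sketch (not part of the slides; it assumes the SymPy library is available) that derives each best response from the first-order condition and solves the resulting system symbolically.

# Sketch: solve the best-response system of Example 39.1 symbolically.
import sympy as sp

a1, a2, c = sp.symbols("a1 a2 c", positive=True)
u1 = a1 * (c + a2 - a1)                 # player 1's payoff
u2 = a2 * (c + a1 - a2)                 # player 2's payoff

b1 = sp.solve(sp.diff(u1, a1), a1)[0]   # best response of player 1: (c + a2)/2
b2 = sp.solve(sp.diff(u2, a2), a2)[0]   # best response of player 2: (c + a1)/2

nash = sp.solve([sp.Eq(a1, b1), sp.Eq(a2, b2)], [a1, a2])
print(nash)                              # {a1: c, a2: c}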
2.8 Best Response Functions
Graphical representation
(Figure: the best response functions b_1(a_2) = (1/2)(c + a_2) and b_2(a_1) = (1/2)(c + a_1) plotted in the (a_1, a_2) plane; the two lines cross once, at the Nash equilibrium (c, c).)
2.8 Best Response Functions
Note that:
The best response of a player to the actions of the other players need not be unique. If a player has many best responses to some of the other players' actions, then her best response function is thick (a surface) at some points;
A Nash equilibrium need not exist: the best response functions may not cross;
If the best response functions are not linear, the Nash equilibrium need not be unique;
Best response functions can be discontinuous, generating another set of difficulties.
2.8 Best Response Functions
Exercise 42.1
Find the Nash equilibria of the two-player strategic game in which each player's set of actions is the set of nonnegative numbers and the players' payoff functions are u_1(a_1, a_2) = a_1(a_2 - a_1) and u_2(a_1, a_2) = a_2(1 - a_1 - a_2).
2.9 Dominated actions
2.9.1 Strict domination
In any game, a player's action strictly dominates another action if it is superior, no matter what the other players do.
Definition 45.1 (Strict domination): in a strategic game with ordinal preferences, player i's action a''_i strictly dominates her action a'_i if

u_i(a''_i, a_-i) > u_i(a'_i, a_-i) for every list a_-i of the other players' actions.

Action a'_i is said to be strictly dominated.

Example: in the Prisoners' Dilemma, the action Fink strictly dominates the action Quiet:

              Quiet   Fink
  Quiet       2,2     0,3
  Fink        3,0     1,1
2.9 Dominated actions
Note that, as a strictly dominated action is not a best response to
any actions of the other players, a strictly dominated action is not
used in any Nash equilibrium.
When looking for Nash equilibria of a game, we can therefore
eliminate from consideration all strictly dominated actions.

2.9.2 Weak domination
In any game, a players action weakly dominates another action if
the first action is at least as good as the second action, no matter
what the other players do, and is better than the second action for
some actions of the other players.
2.9 Dominated actions
Definition 46.1 (Weak domination): in a strategic game with ordinal preferences, player i's action a''_i weakly dominates her action a'_i if

u_i(a''_i, a_-i) ≥ u_i(a'_i, a_-i) for every list a_-i of the other players' actions
and
u_i(a''_i, a_-i) > u_i(a'_i, a_-i) for some list a_-i of the other players' actions.

Note that in a strict Nash equilibrium no player's equilibrium action is weakly dominated, but in a nonstrict Nash equilibrium an action can be weakly dominated.
2.9 Dominated actions
Exercise 47.1 (Strict equilibria and dominated actions)
For the game in Figure 48.1, determine, for each player, whether any action is strictly dominated or weakly dominated. Find the Nash equilibria of the game. Determine whether any equilibrium is strict.

       L     C     R
  T   0,0   1,0   1,1
  M   1,1   1,1   3,0
  B   1,1   2,1   2,2
(Figure 48.1; player 1 chooses the row.)
2.9 Dominated actions
2.9.4 Illustration: collective decision-making
The members of a group of people are affected by a policy, modeled as a number. Each person i has a favorite policy, denoted x*_i. She prefers the policy y to the policy z if and only if y is closer to x*_i than z is. The number n of people is odd. The following mechanism is used to choose the policy:
each person names a policy
the policy chosen is the median of those named
E.g.: if there are five people and they name the policies -2, 0, 0.6, 5 and 10, the policy 0.6 is chosen.
Questions:
Model this situation as a strategic game
Find the equilibrium strategy of the players
Does anyone have an incentive to name her favorite policy?
2.9 Dominated actions
Strategic game:
Players: the n people
Actions: each person's set of actions is the set of policies (numbers)
Preferences: each person i prefers the action profile a to the action profile a' if and only if the median policy named in a is closer to x*_i than is the median policy named in a'.

Equilibrium strategy of the players:
Claim: for each player i, the action of naming her favorite policy x*_i weakly dominates all her other actions.
Why?
2.9 Dominated actions
Proof:
Take x_i > x*_i (reporting a higher policy than the preferred one).

a. For all actions of the other players, player i is at least as well off naming x*_i as she is naming x_i.
For any list of actions of the players other than player i, denote by a- and a+ the two middle values of those actions (so that half of the remaining players' actions are at most a- and half of them are at least a+).
if x*_i ≥ a+: the median policy is the same whether player i names x*_i or x_i (as x_i > x*_i);
if x_i ≤ a-: the same holds true (as x*_i < x_i);
if x*_i < a+ and x_i > a-, then:
when player i names x*_i, the median policy is at most the greater of x*_i and a-;
when player i names x_i, the median policy is at least the lesser of x_i and a+.
Thus, player i is worse off naming x_i than naming x*_i.
2.9 Dominated actions
b. For some actions of the other players, player i is better off naming x*_i than she is naming x_i.
Suppose that half of the remaining players name policies less than x*_i and half of them name policies greater than x_i. Then the outcome is x*_i if player i names x*_i, and x_i if she names x_i. Thus player i is better off naming x*_i than she is naming x_i.

A symmetric argument applies when x_i < x*_i.
Naming one's favorite policy (telling the truth) weakly dominates all other actions.
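A small simulation (a sketch, not from the slides; the favorite policy 0.3, the number of players and the sampling interval are arbitrary choices) makes the claim easy to probe: over many random profiles of the other players' reports and random deviations, no deviation ever produces a median strictly closer to the favorite policy than truthful reporting does.

# Sketch: empirical check that naming one's favorite policy weakly dominates
# in the median-policy mechanism.
import random, statistics

def check(trials=10000, n=5, fav=0.3):
    for _ in range(trials):
        others = [random.uniform(-1, 2) for _ in range(n - 1)]
        deviation = random.uniform(-1, 2)
        truthful = abs(statistics.median(others + [fav]) - fav)
        deviant = abs(statistics.median(others + [deviation]) - fav)
        if deviant < truthful:          # a strictly profitable deviation
            return False
    return True

print(check())  # True: no profitable deviation found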
2.10 Equilibrium in a single
population: symmetric games
We focus here on cases in which we want to model the interaction between members of a single homogeneous population of players. Players interact anonymously and symmetrically.

Definition 51.1 (Symmetric two-player game with ordinal preferences)
A two-player strategic game with ordinal preferences is symmetric if the players' sets of actions are the same and the players' preferences are represented by payoff functions u_1 and u_2 for which u_1(a_1, a_2) = u_2(a_2, a_1) for every action pair (a_1, a_2).

Definition 52.1 (Symmetric Nash equilibrium)
An action profile a* in a strategic game with ordinal preferences in which each player has the same set of actions is a symmetric Nash equilibrium if it is a Nash equilibrium and a*_i is the same for every player i.
2.10 Equilibrium in a single
population: symmetric games
Exercise 52.2
Find all the Nash equilibria of the game in Figure 53.1. Which of the equilibria, if any, correspond to a steady state if the game models pairwise interactions between the members of a single population?

       A     B     C
  A   1,1   2,1   4,1
  B   1,2   5,5   3,6
  C   1,4   6,3   0,0
(Figure 53.1)
3 Nash Equilibrium:
Illustrations
3.5 Auctions
3.5.1 Introduction
Auctions are used to allocate significant economic resources, from works of art to short-term government bonds to radio spectrum.
Auctions come in many forms:
Sequential or sealed-bid (simultaneous)
First or second price
Ascending (English) or descending (Dutch)
Single or multi-unit
With or without a reservation price
With or without entry costs

Auctions:
have existed for a long time (annual auctions of marriageable women in Babylonian villages)
and remain up to date (eBay on the Internet)
Main questions:
Which designs are likely to be the most effective at allocating resources?
Which designs are likely to raise the most revenue?
3.5 Auctions
Main assumption: we discuss here auctions in which every buyer knows her own valuation and every other buyer's valuation of the item being sold.

Buyers are perfectly informed.

This assumption will be dropped in Chapter 9.
3.5 Auctions
3.5.2 Second-price sealed-bid auctions
In a common form of auction, people sequentially submit increasing bids for an object. When no one wishes to submit a higher bid than the current bid, the person making the current bid obtains the object at the price she bid.
Given that every person is certain of her valuation (perfect valuation) of the object before the bidding begins, no one can learn anything relevant to her actions during the bidding.
Thus we can model the auction by assuming that each person decides, before bidding begins, the most she is willing to bid (her maximal bid).
During the bidding, eventually, only the person with the highest maximal bid and the one with the second highest maximal bid will be left competing against each other.
To win, the person with the highest maximal bid therefore needs to bid only slightly more than the second highest maximal bid.
3.5 Auctions
We can therefore model such an ascending auction as a strategic game in which each player chooses an amount of money (the maximal amount she is willing to bid) and the player who chooses the highest amount obtains the object and pays a price equal to the second highest amount.

This game also models a situation in which people simultaneously put bids in sealed envelopes, and the person who submits the highest bid wins and pays a price equal to the second highest bid.

In a perfect information context, ascending (English) auctions and second-price sealed-bid auctions are modeled by the same strategic game.
3.5 Auctions
Notation
v_i: the value player i attaches to the object
p: price paid for the object
v_i - p: the winning player's payoff
n: number of players
Number the players so that v_1 > v_2 > ... > v_n > 0
b_i: sealed bid submitted by player i

Rules
Each player submits a sealed bid b_i.
If b_i is the highest bid, player i wins the auction, gets the object and pays the second highest bid (say b_j). In such a case, player i's payoff is v_i - b_j.
In case of a tie, the player with the smallest number (the highest valuation) wins. She pays her own bid (as there is a tie).
3.5 Auctions
Strategic game representation:

Players: the n bidders, where n ≥ 2

Actions: the set of actions of each player is the set of possible bids (nonnegative numbers)

Preferences: denote by b_i the bid of player i and by b+ the highest bid submitted by any other player. If either b_i > b+, or b_i = b+ and the number of every other player who bids b+ is greater than i, then player i's payoff is v_i - b+. Otherwise player i's payoff is 0.
3.5 Auctions
Nash equilibrium
The game has many Nash equilibria.
One equilibrium is (b_1, b_2, ..., b_n) = (v_1, v_2, ..., v_n): each player's bid is equal to her valuation of the object:

because v_1 > v_2 > ... > v_n, the outcome is that player 1 obtains the object and pays b_2. Her payoff is v_1 - b_2. Every other player's payoff is zero.

if player 1 changes her bid to some other price at least equal to b_2, then the outcome does not change. If she changes her bid to a price less than b_2, then she loses and obtains a zero payoff.

if some other player lowers her bid, or raises her bid to some price at most equal to b_1, then she remains a loser. If she raises her bid above b_1, then she wins but, in paying the price b_1, she makes a loss (because her valuation is less than b_1).
3.5 Auctions
Another equilibrium is (b_1, b_2, ..., b_n) = (v_1, 0, ..., 0): player 1 obtains the object and pays 0. A sad outcome for the auctioneer...
Another equilibrium is (b_1, b_2, ..., b_n) = (v_2, v_1, 0, ..., 0): player 2 bids v_1 and obtains the object at price v_2, and every player's payoff is zero:
if player 1 raises her bid to v_1 or more, she wins the object but her payoff remains zero (she pays the price v_1, bid by player 2);
if player 2 changes her bid to some other price greater than v_2, the outcome does not change. If she changes her bid to v_2 or less, she loses, and her payoff remains zero;
if any other player raises her bid to at most v_1, the outcome does not change. If she raises her bid above v_1, then she wins but gets a negative payoff.
Note that, in this equilibrium, player 2 bids more than her valuation. This might seem strange. It is due to the fact that, in a Nash equilibrium, a player does not consider the risk that another player will take an action different from her equilibrium action. Each player simply chooses an action that is optimal, given the other players' actions.
This however suggests that this equilibrium is less plausible as an outcome of the auction than the equilibrium in which each bidder bids her valuation.
3.5 Auctions
This is due to the fact that:

in a second-price sealed-bid auction (with perfect information), a player's bid equal to her valuation weakly dominates all her other bids.

That is:

for any bid b_i ≠ v_i, player i's bid v_i is at least as good as b_i, no matter what the other players bid, and is better than b_i for some actions of the other players.
3.5 Auctions
The precise argument is given by Figure 85.1 (three panels plotting player i's payoff against b+, the highest of the other players' bids).

The figure compares player i's payoff to the bid v_i (left panel) with her payoff to a bid b_i < v_i (middle panel) and with her payoff to a bid b_i > v_i (right panel), as a function of the highest of the other players' bids (b+).

We see that:
- for all values of b+, player i's payoff to the bid v_i is at least as large as her payoff to any other bid;
- for some values of b+, her payoff to the bid v_i exceeds her payoff to any other bid.
3.5 Auctions
Exercise 84.1
Find a Nash equilibrium of a second-price sealed-bid auction in which player n obtains the object.

Exercise 86.1 (Auctioning the right to choose)
An action affects each of two people. The right to choose the action is sold in a second-price auction. That is, the two people simultaneously submit bids, and the one who submits the higher bid chooses her favorite action and pays (to a third party) the amount bid by the other person, who pays nothing. Assume that if the bids are the same, person 1 is the winner.
For i = 1, 2, the payoff of person i when the action is a and person i pays m is u_i(a) - m.
In the game that models this situation, find for each player a bid that weakly dominates all the player's other bids (and thus find a Nash equilibrium in which each player's equilibrium action weakly dominates all her other actions).
3.5 Auctions
3.5.3 First-price sealed-bid auctions
Difference with a second-price auction: the winner pays the price she bids.

Strategic game representation:
Players: the n bidders, where n ≥ 2
Actions: the set of actions of each player is the set of possible bids (nonnegative numbers)
Preferences: denote by b_i the bid of player i and by b+ the highest bid submitted by any other player. If either (a) b_i > b+, or (b) b_i = b+ and the number of every other player who bids b+ is greater than i, then player i's payoff is v_i - b_i. Otherwise, player i's payoff is 0.
3.5 Auctions
Note that this game models:
a sealed-bid auction in which the highest bid wins
but also
a dynamic auction in which the auctioneer begins by announcing a high price, which she gradually lowers until someone indicates her willingness to buy the object (a Dutch auction).
(This equivalence is even, in some sense, stronger than the one between an ascending auction and a second-price sealed-bid auction: it does not depend on private values.)

Nash equilibrium
One Nash equilibrium is (b_1, b_2, ..., b_n) = (v_2, v_2, v_3, ..., v_n), in which player 1's bid is player 2's valuation and every other player's bid is her own valuation. The outcome is that player 1 obtains the object at price v_2.
3.5 Auctions
Exercise 86.2
Show that (b_1, b_2, ..., b_n) = (v_2, v_2, v_3, ..., v_n) is a Nash equilibrium of a first-price sealed-bid auction.

A first-price sealed-bid auction has many other equilibria, but in all equilibria the winner is the player who values the object most highly (player 1), by the following argument:
in any action profile (b_1, ..., b_n) in which some player i ≠ 1 wins, we have b_i > b_1:
if b_i > v_2, then i's payoff is negative, so that she can do better by reducing her bid to 0;
if b_i ≤ v_2, then player 1 can increase her payoff from 0 to v_1 - b_i by bidding b_i, in which case she wins.
3.5 Auctions
Exercise 87.1 (First-price sealed-bid auction)
Show that in a Nash equilibrium of a first-price sealed-bid auction the two highest bids are the same, one of these bids is submitted by player 1, and the highest bid is at least v_2 and at most v_1. Show also that any action profile satisfying these conditions is a Nash equilibrium.
3.5 Auctions
As in the second-price sealed-bid auction, the potential riskiness to player i of a bid b_i > v_i is reflected in the fact that it is weakly dominated by the bid v_i, as shown by the following argument:
if the other players' bids are such that player i loses when she bids b_i, then the outcome is the same whether she bids b_i or v_i;
if the other players' bids are such that player i wins when she bids b_i, then her payoff is negative when she bids b_i and zero when she bids v_i (regardless of whether this bid wins).
However, unlike in a second-price auction, in a first-price auction a bid b_i < v_i of player i is not weakly dominated by the bid v_i (it is in fact not weakly dominated by any bid):
it is not weakly dominated by a bid b'_i < b_i, because if the other players' highest bid is between b'_i and b_i, then b'_i loses whereas b_i wins and yields player i a positive payoff;
it is not weakly dominated by a bid b'_i > b_i, because if the other players' highest bid is less than b_i, then both b_i and b'_i win and b_i yields a lower price.
3.5 Auctions
Note also that, though the bid v_i weakly dominates higher bids, this bid is itself weakly dominated by a lower bid! The argument is the following:
if player i bids v_i, her payoff is 0 regardless of the other players' bids;
whereas, if she bids less than v_i, her payoff is either 0 (if she loses) or positive (if she wins).

In a first-price sealed-bid auction (with perfect information), a player's bid of at least her valuation is weakly dominated, and a bid of less than her valuation is not weakly dominated.
3.5 Auctions
Note finally that this property of the equilibria depends on the assumption that a bid may be any number. In the variant of the game in which bids and valuations are restricted to be multiples of some discrete monetary unit ε:
an action profile (v_2 - ε, v_2 - ε, b_3, ..., b_n), for any b_j ≤ v_j - ε for j = 3, ..., n, is a Nash equilibrium in which no player's bid is weakly dominated;
further, every equilibrium in which no player's bid is weakly dominated takes this form.
If ε is small, this is very close to (v_2, v_2, b_3, ..., b_n): this equilibrium is therefore (on a somewhat ad hoc basis) considered as the distinguished equilibrium of a first-price sealed-bid auction.
One conclusion of this analysis is that, while both second-price and first-price auctions have many Nash equilibria, their distinguished equilibria yield the same outcome: in every distinguished equilibrium of each game, the object is sold to player 1 at the price v_2. This notion of revenue equivalence is a cornerstone of auction theory and will be analyzed in depth later.
3.5 Auctions
3.5.4 Variants
Uncertain valuations: we have assumed that each bidder is certain of both her own valuation and every other bidder's valuation, which is highly unrealistic. We will study the case of imperfect information in Chapter 9 (in the framework of Bayesian games).
Interdependent/common valuations: in some auctions, the main difference between bidders is not that they value the object differently but that they have different information about its value (e.g., oil tract auctions). As this also involves informational considerations, we will again study it in Chapter 9.
All-pay auctions: in some auctions, every bidder pays, not only the winner (e.g., competition of lobby groups for government attention).
3.5 Auctions
Multi-unit auctions: in some auctions, many units of an object are available (e.g., US Treasury bill auctions) and each bidder may value positively more than one unit. Each bidder therefore chooses a bid profile (b_1, b_2, ..., b_k) if there are k units for sale. Different auction mechanisms exist and are characterized by the rule governing the price paid by the winners:
Discriminatory auction: the price paid for each unit is the winning bid for that unit
Uniform-price auction: the price paid for each unit is the same, equal to the highest rejected bid among all the bids for all units
Vickrey auction (named after the Nobel prize winner): a bidder who wins k objects pays the sum of the k highest rejected bids submitted by the other bidders.
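The three pricing rules can be illustrated on a tiny numerical example. The sketch below (not from the slides; the bids are made up, and the Vickrey computation follows the rule as stated above) ranks all bids, determines the k winning units and computes what each bidder pays under each rule.

# Sketch (made-up bids): prices under three multi-unit auction rules for k units.
# bids[name] = that bidder's list of bids, one per unit she might win.
k = 3
bids = {"A": [10, 7, 3], "B": [9, 4, 2], "C": [6, 5, 1]}

# Rank all (bid, bidder) pairs; the k highest bids win.
all_bids = sorted(((b, name) for name, bs in bids.items() for b in bs), reverse=True)
winners, rejected = all_bids[:k], all_bids[k:]

# Discriminatory: each winning bid pays its own amount.
discriminatory = {}
for b, name in winners:
    discriminatory[name] = discriminatory.get(name, 0) + b

# Uniform price: every unit is sold at the highest rejected bid.
uniform_price = rejected[0][0]

# Vickrey: a bidder winning m units pays the m highest rejected bids of the OTHERS.
vickrey = {}
for name in bids:
    m = sum(1 for _, w in winners if w == name)
    others_rejected = sorted((b for b, w in rejected if w != name), reverse=True)
    vickrey[name] = sum(others_rejected[:m])

print(discriminatory)   # {'A': 17, 'B': 9}: A wins 2 units (10, 7), B wins 1 (9)
print(uniform_price)    # 6: the highest rejected bid
print(vickrey)          # {'A': 11, 'B': 6, 'C': 0}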
4. Mixed Strategy
Equilibrium
4.1. Introduction
4.1.1. Stochastic steady state
Nash equilibrium in a strategic game: an action profile in which every player's action is optimal given every other player's action (see Definition 23.1).
This corresponds to a steady state of the game:
every player's behavior is the same whenever she plays the game
no player wishes to change her behavior, knowing (from experience) the other players' behavior
In such a framework, the outcome of every play of the game is the same Nash equilibrium.

A more general notion of steady state exists:
4.1. Introduction
players' choices are allowed to vary:
different members of a given population may choose different actions, each player choosing the same action whenever she plays the game
each individual may, on each occasion she plays the game, choose her action probabilistically according to the same, unchanging, distribution
these situations are equivalent:
in the first case, a fraction p of the population representing player i chooses the action a
in the second case, each member of the population representing player i chooses the action a with probability p

These notions of (stochastic) steady state are modeled as a mixed strategy Nash equilibrium.
4.1. Introduction
4.1.2 Example: Matching Pennies
              Player 2
              Head    Tail
  Head        1,-1    -1,1
  Tail        -1,1    1,-1
(Player 1 chooses the row.)

Outcomes
The game has no Nash equilibrium: no pair of actions is compatible with a steady state.
4.1. Introduction
The game has however a stochastic steady state in which each player chooses each of her actions with probability 1/2:

Suppose that player 2 chooses each of her actions with probability 1/2. If player 1 chooses Head with probability p and Tail with probability (1-p), then:
each outcome (Head, Head) and (Head, Tail) occurs with probability p × 1/2
each outcome (Tail, Head) and (Tail, Tail) occurs with probability (1-p) × 1/2
Thus, the probability that the outcome is either (Head, Head) or (Tail, Tail) (in which case player 1 wins $1) is p × 1/2 + (1-p) × 1/2 = 1/2.
The other two outcomes (Head, Tail) and (Tail, Head) (which correspond to a loss of $1) also have probability 1/2.
4.1. Introduction
the probability distribution over outcomes is independent of p!
every value of p is optimal (in particular 1/2)!
the same analysis holds for player 2. We conclude that the game has a stochastic steady state in which each player chooses each action with probability 1/2.

Moreover (under a reasonable assumption on the players' preferences), the game has no other steady state:
Assumption: each player wants the probability of her gaining $1 to be as large as possible (maximization of expected profit)
Denote by q the probability with which player 2 chooses Head (she chooses Tail with probability (1-q))
If player 1 chooses Head with probability p, she gains $1 with probability pq + (1-p)(1-q) (outcomes (Head, Head) or (Tail, Tail)) and she loses $1 with probability (1-p)q + p(1-q).
4.1. Introduction
Note that:
Player 1 wins $1 with probability pq + (1-p)(1-q) = 1 - q + p(2q-1)
Player 1 loses $1 with probability (1-p)q + p(1-q) = q + p(1-2q)
If q < 1/2, the first probability (winning $1) is decreasing in p and the second probability (losing $1) is increasing in p. Player 1 therefore chooses p = 0.
Thus, if player 2 chooses Head with probability less than 1/2, the best response of player 1 is to choose Tail with certainty.
A similar argument shows that if player 2 chooses Head with probability greater than 1/2, the best response of player 1 is to choose Head with certainty.
We have already shown that if one player chooses a given action with certainty, there is no steady state (the game has no Nash equilibrium in pure actions).
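The algebra above is easy to tabulate. The sketch below (not part of the slides) evaluates player 1's winning probability pq + (1-p)(1-q) for a few values of p and q, showing that her best response is p = 0 when q < 1/2, p = 1 when q > 1/2, and that every p does equally well when q = 1/2.

# Sketch: player 1's probability of winning $1 in Matching Pennies,
# P(win) = p*q + (1-p)*(1-q), as a function of her probability p of Head.
def p_win(p, q):
    return p * q + (1 - p) * (1 - q)

for q in (0.3, 0.5, 0.7):
    values = {p: round(p_win(p, q), 2) for p in (0.0, 0.5, 1.0)}
    best_p = max(values, key=values.get)
    print(f"q = {q}: P(win) at p = 0, 0.5, 1 -> {values}, a maximizer is p = {best_p}")
# q = 0.3: decreasing in p (best p = 0); q = 0.7: increasing (best p = 1);
# q = 0.5: constant, every p is a best response.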
4.1. Introduction
4.1.3 Generalizing the analysis: expected payoffs
The matching pennies case is particularly simple because it has
only two outcomes for each player, allowing to deduce players
preferences regarding lotteries (probability distributions) over
outcomes from their preferences regarding deterministic
outcomes:
if a player prefers a to b and if p > q, he most likely prefers a
lottery in which a occurs with probability p (and b with probability
(1-p)) to a lottery in which a occurs with probability q (and b with
probability (1-q))

To deal with more general cases (eg, more than two outcomes),
we need to add to the model a description of her preferences
regarding lotteries (probability distribution) over outcomes
Game Theory - A (Short) Introduction 102 9/12/2011
4.1. Introduction
The standard approach is to restrict attention to preferences regarding lotteries (probability distributions) over outcomes that may be represented by the expected value of a payoff function over deterministic outcomes:
for every player i, there is a payoff function u_i with the property that player i prefers one probability distribution over outcomes to another if and only if, according to u_i, the expected value of the first probability distribution exceeds the expected value of the second.
E.g.:
three outcomes: a, b, c
two probability distributions: P = (p_a, p_b, p_c) and Q = (q_a, q_b, q_c)
for player i, P is preferred to Q if and only if
p_a u_i(a) + p_b u_i(b) + p_c u_i(c) > q_a u_i(a) + q_b u_i(b) + q_c u_i(c)

Preferences that can be represented by the expected value of a payoff function over deterministic outcomes are called vNM (von Neumann-Morgenstern) preferences.
A payoff function whose expected value represents such preferences is called a Bernoulli payoff function.
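A minimal numerical illustration of this expected-value criterion (a sketch, not from the slides; the outcomes and payoff numbers are invented):

# Sketch: compare two lotteries over outcomes {a, b, c} by expected Bernoulli payoff.
u = {"a": 3.0, "b": 1.0, "c": 0.0}          # hypothetical Bernoulli payoffs
P = {"a": 0.5, "b": 0.0, "c": 0.5}          # lottery P: a or c, each with prob. 1/2
Q = {"a": 0.0, "b": 1.0, "c": 0.0}          # lottery Q: b for sure

def expected_payoff(lottery, u):
    return sum(prob * u[x] for x, prob in lottery.items())

print(expected_payoff(P, u), expected_payoff(Q, u))   # 1.5 1.0 -> P preferred to Q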
4.1. Introduction
The restrictions on preferences regarding prob. dist. over
outcomes required for them to be represented by expected
value of a payoff function are NOT innocuous (see violations
example on page 104). They are however commonly accepted
in game theory.

However, these restriction do not restrict player attitudes to risk:
eg. :
suppose that a,b and c are three outcomes. A person prefers a to b to c.
If the person is very averse to risky outcomes, she prefers then to obtain
b for sure rather than to face a prob. dist. in which a occurs with
probability p and c with probability (1-p), even if p is relatively large.
such preferences can be represented by the expected value of a payoff
function u for which u(a) is close to u(b), which is much larger than u(c)
(concave payoff function)
(Figure 103.1: a concave Bernoulli payoff function u, with u(a) close to
u(b) and both much larger than u(c).)
Game Theory - A (Short) Introduction 104 9/12/2011
4.1. Introduction
Note that if the outcomes are amounts of money and the preferences are
represented by the expected value of the amount of money, the player is risk
neutral.

Two classic utility functions: CARA & CRRA

In reality:
the fact that people buy insurance (the expected payout is less than the
premium) shows that economic agents are risk averse.
the fact that people buy lottery tickets shows that, in some circumstances, they
can be risk loving (small stake, extremely high possible payoff).
in both cases, the preferences can be represented by the expected value of a
payoff function:
concave in the case of risk aversion
convex in the case of risk preference

Note finally that, for given preferences, many different payoff functions can be used to
represent them (any two Bernoulli payoff functions related by an increasing affine
transformation represent the same preferences; see Lemma 148.1 below).
Game Theory - A (Short) Introduction 105 9/12/2011
4.2 Strategic games in which
players may randomize
Definition 106.1 (Strategic game with vNM preferences)
A strategic game with vNM preferences consists of
a set of players
for each player, a set of actions
for each player, preferences regarding prob. dist. over action
profiles that may be represented by the expected value of a
(Bernoulli) payoff function over action profiles.

Representation: a two-player strategic game with vNM preferences in
which each player has finitely many actions may be represented in a
table as in Chapter 2. However, the interpretation of the numbers is
different:
in Chapter 2, numbers are values of payoff functions that represent the
players' preferences over deterministic outcomes
here, numbers are values of (Bernoulli) payoff functions whose expected
values represent the players' preferences over probability distributions.
Game Theory - A (Short) Introduction 106 9/12/2011
4.2 Strategic games in which
players may randomize
The change is subtle but important (figure 107.1)






The two games represent the same strategic game with ordinal preferences (the
Prisoner's Dilemma).
However, they represent different strategic games with vNM
preferences:
left game: player 1's payoff to (Q,Q) is the same as her expected payoff
to the probability distribution that yields (F,Q) with probability ½ and (F,F) with
probability ½
right game: her payoff to (Q,Q) is higher than her expected payoff to this
probability distribution.
              Q       F                     Q       F
      Q      2,2     0,3             Q     3,3     0,4
      F      3,0     1,1             F     4,0     1,1
   (Figure 107.1: left and right tables)
Game Theory - A (Short) Introduction 107 9/12/2011
4.3 Mixed strategy Nash
equilibrium
4.3.1 Mixed strategies
We allow now each player to choose a probability distribution over
her set of actions (rather than restricting her to choose a single
deterministic action)

Definition 107.1 (Mixed strategy)
A mixed strategy of a player in a strategic game is a probability
distribution over the players actions.

Notations:
α: profile of mixed strategies (one distribution per player)
α_i(a_i): probability assigned by player i's mixed strategy α_i to her
action a_i
Game Theory - A (Short) Introduction 108 9/12/2011
4.3 Mixed strategy Nash
equilibrium
e.g.: in Matching Pennies, the strategy of player 1 that assigns
probability ½ to each action is the strategy α_1 with α_1(Head) = ½ and
α_1(Tail) = ½.
Shortcut: mixed strategies are often written as a list of
probabilities (one for each action), in the order the actions are
given in the table: for the game in Figure 107.1, a list (p, 1-p)
assigns probability p to Q and probability 1-p to F.

Note that a mixed strategy may assign probability 1 to a single
action. In that case, such a strategy is referred to as a pure
strategy.
Game Theory - A (Short) Introduction 109 9/12/2011
4.3 Mixed strategy Nash
equilibrium
4.3.2 Equilibrium
The mixed strategy Nash equilibrium extends the concept of
Nash equilibrium to the probabilistic setup.

Definition 108.1 (Mixed strategy Nash equilibrium of strategic
game with vNM preferences)
The mixed strategy profile α* in a strategic game with vNM
preferences is a mixed strategy Nash equilibrium if, for each
player i and every mixed strategy α_i of player i, the expected
payoff to player i of α* is at least as large as the expected
payoff to player i of (α_i, α*_-i), according to a payoff function
whose expected value represents player i's preferences over
probability distributions:

   U_i(α*) ≥ U_i(α_i, α*_-i) for every mixed strategy α_i of player i,

where U_i(α) is player i's expected payoff to the mixed strategy profile α.
Game Theory - A (Short) Introduction 110 9/12/2011
4.3 Mixed strategy Nash
equilibrium
4.3.3 Best response functions

Notation: B_i is player i's best response function.

For a strategic game with ordinal preferences: B_i(a_-i) is the set of
player i's best actions when the list of the other players' actions is a_-i.

For a strategic game with vNM preferences, B_i(α_-i) is the set of
player i's best mixed strategies when the list of the other players'
mixed strategies is α_-i.
The mixed strategy profile α* is a mixed strategy Nash
equilibrium if and only if α*_i is in B_i(α*_-i) for every player i.

e.g.: in Matching Pennies, the set of best responses to a mixed
strategy of the other player is either a single pure strategy or the set of all
mixed strategies.
Game Theory - A (Short) Introduction 111 9/12/2011
4.3 Mixed strategy Nash
equilibrium
Two-player, two-action games
Player 1 has actions T and B
Player 2 has actions L and R
u_i (i = 1,2) denotes a Bernoulli payoff function for player i (a payoff function
over action pairs whose expected value represents player i's
preferences regarding probability distributions over action pairs)
Player 1's mixed strategy α_1 assigns probability α_1(T) to her
action T (denoted p) and probability α_1(B) to her action B
(denoted 1-p), with α_1(T) + α_1(B) = 1.
Similarly, denote by q the probability that player 2's mixed
strategy assigns to L and 1-q the probability it assigns to R.
We take the players' choices to be independent: when players
choose the mixed strategies α_1 and α_2, the probability of any
action pair (a_1, a_2) is the product of the corresponding
probabilities assigned by the mixed strategies.
Game Theory - A (Short) Introduction 112 9/12/2011
4.3 Mixed strategy Nash
equilibrium








So, the probabilities of the four outcomes are (Figure 109.1):

                 L (q)          R (1-q)
   T (p)         pq             p(1-q)
   B (1-p)       (1-p)q         (1-p)(1-q)

From this probability distribution, we can compute player 1's
expected payoff to the mixed strategy pair (α_1, α_2):

   pq u_1(T,L) + p(1-q) u_1(T,R) + (1-p)q u_1(B,L) + (1-p)(1-q) u_1(B,R)
Game Theory - A (Short) Introduction 113 9/12/2011
4.3 Mixed strategy Nash
equilibrium








which can be written more compactly as:

   p [q u_1(T,L) + (1-q) u_1(T,R)] + (1-p) [q u_1(B,L) + (1-q) u_1(B,R)]
   = p E_1[T, α_2] + (1-p) E_1[B, α_2]

where E_1[T, α_2] is player 1's expected payoff when she uses the pure
strategy that assigns probability 1 to T and player 2 uses the mixed
strategy α_2, and E_1[B, α_2] is her expected payoff when she assigns
probability 1 to B and player 2 uses α_2.

This expresses player 1's expected payoff to the mixed strategy pair
(α_1, α_2) as a weighted average of her expected payoffs to T and B
when player 2 uses the mixed strategy α_2, with weights
equal to the probabilities assigned to T and B by α_1.
Game Theory - A (Short) Introduction 114 9/12/2011
4.3 Mixed strategy Nash
equilibrium
In particular, player 1's expected payoff is a linear function of p:

   p E_1[T, α_2] + (1-p) E_1[B, α_2]   for 0 ≤ p ≤ 1

(Figure 110.1: this line rises from E_1[B, α_2] at p = 0 to E_1[T, α_2] at
p = 1, drawn for the case E_1[T, α_2] > E_1[B, α_2].)
Game Theory - A (Short) Introduction 115 9/12/2011
4.3 Mixed strategy Nash
equilibrium
A significant implication of this linearity of player 1's
expected payoff is that there are only three possibilities for her
best response to a given mixed strategy of player 2:
player 1's unique best response is the pure strategy T (if E_1[T, α_2] >
E_1[B, α_2]): see figure 110.1
player 1's unique best response is the pure strategy B (if E_1[T, α_2] <
E_1[B, α_2]): see figure 110.1 with a downward-sloping line
all mixed strategies of player 1 yield the same expected payoff
(hence all are best responses) (if E_1[T, α_2] = E_1[B, α_2]): see figure
110.1 with a horizontal line
In particular, a mixed strategy (p, 1-p) with 0 < p < 1 is never
a unique best response.

Game Theory - A (Short) Introduction 116 9/12/2011
4.3 Mixed strategy Nash
equilibrium
Example: Matching Pennies revisited
Represent each player's preferences by the expected value of a
payoff function that assigns the payoff 1 to a gain of $1 and the
payoff -1 to a loss of $1. The resulting strategic game with vNM
preferences is (Figure 111.1):

                             Player 2
                         Head        Tail
   Player 1   Head       1,-1        -1,1
              Tail       -1,1         1,-1
Game Theory - A (Short) Introduction 117 9/12/2011
4.3 Mixed strategy Nash
equilibrium
Denote by p the probability that player 1s mixed strategy assigns
to Head and q the probability that player 2s mixed strategy assigns
to Head.

Player 1's expected payoff to the pure strategy Head, given player 2's
mixed strategy, is: q · 1 + (1-q) · (-1) = 2q - 1

Her expected payoff to Tail is: q · (-1) + (1-q) · 1 = 1 - 2q
(Figure 112.1: the players' best response functions in the (p, q)-square,
where p is player 1's probability of Head and q is player 2's probability
of Head.)
Game Theory - A (Short) Introduction 118 9/12/2011
4.3 Mixed strategy Nash
equilibrium
Thus:
if q < ½, player 1's expected payoff to Tail exceeds her
expected payoff to Head (and hence also exceeds her
expected payoff to any mixed strategy that assigns positive
probability to Head)
similarly, if q > ½, her expected payoff to Head exceeds her
expected payoff to Tail.
if q = ½, then both Head and Tail (and all her mixed strategies)
yield the same expected payoff.

We conclude that player 1's best response to player 2's
strategy is her mixed strategy that assigns probability 0 to
Head if q < ½, her mixed strategy that assigns probability 1 to
Head if q > ½, and all her mixed strategies if q = ½.
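
A minimal sketch of this best-response reasoning (Python; payoffs as in Figure 111.1, function names are illustrative assumptions):

    # Player 1's expected payoffs in Matching Pennies as a function of q
    # (player 2's probability of Head), and her best-response set.
    def payoffs_player1(q):
        e_head = q * 1 + (1 - q) * (-1)   # = 2q - 1
        e_tail = q * (-1) + (1 - q) * 1   # = 1 - 2q
        return e_head, e_tail

    def best_response_p(q, tol=1e-9):
        e_head, e_tail = payoffs_player1(q)
        if e_head > e_tail + tol:
            return {1.0}                   # Head with certainty
        if e_tail > e_head + tol:
            return {0.0}                   # Tail with certainty
        return "any p in [0,1]"            # indifferent: all mixed strategies

    for q in (0.25, 0.5, 0.75):
        print(q, best_response_p(q))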

Game Theory - A (Short) Introduction 119 9/12/2011
4.3 Mixed strategy Nash
equilibrium
The best response function of player 2 is similar (see figure 112.1)

The set of mixed strategy Nash equilibria corresponds (as before)
to the set of intersections of the best response functions in figure
112.1.

Matching Pennies has no Nash Equilibrium if players are not
allowed to randomize !
Game Theory - A (Short) Introduction 120 9/12/2011
4.3 Mixed strategy Nash
equilibrium
Exercise 114.2

Find all the mixed strategy Nash equilibria of the strategic
games in Figure 114.2:

              L       R                     L       R
      T      6,0     0,6             T     0,1     0,2
      B      3,2     6,0             B     2,2     0,1
   (Figure 114.2)
Game Theory - A (Short) Introduction 121 9/12/2011
4.3 Mixed strategy Nash
equilibrium
Exercise 114.3
Two people can perform a task if, and only if, they both exert effort. They are both
better off if they both exert effort and perform the task than if neither exerts effort
(and nothing is accomplished); the worst outcome for each person is that she
exerts effort and the other person does not (in which case again nothing is
accomplished). Specifically, the players' preferences are represented by the
expected value of the payoff functions in Figure 115.1, where c is a positive
number less than 1 that can be interpreted as the cost of exerting effort. Find all
the mixed strategy Nash equilibria of this game. How do the equilibria change as c
increases? Explain the reasons for the changes.

                  No effort    Effort
   No effort        0,0         0,-c
   Effort          -c,0        1-c,1-c
   (Figure 115.1)
Game Theory - A (Short) Introduction 122 9/12/2011
4.3 Mixed strategy Nash
equilibrium
4.3.4 A useful characterization of mixed strategy Nash
equilibrium
The method used up to now to find mixed strategy Nash equilibria
involves constructing the players' best response functions. In
complicated games, this method may be intractable. There is a
characterization of mixed strategy Nash equilibria that is an
invaluable tool in the study of general games.
The key is the following observation: a player's expected payoff
to a mixed strategy profile is a weighted average of her
expected payoffs to all pure strategy profiles of the type (a_i, α_-i),
where the weight attached to each profile (a_i, α_-i) is the
probability α_i(a_i) assigned to the action a_i by player i's
mixed strategy α_i (see section 4.3.3).
Game Theory - A (Short) Introduction 123 9/12/2011
4.3 Mixed strategy Nash
equilibrium
Symbolically:

   U_i(α) = Σ_{a_i in A_i} α_i(a_i) E_i(a_i, α_-i)

where:
A_i is player i's set of actions (pure strategies)
E_i(a_i, α_-i) is her expected payoff when she uses the pure strategy
that assigns probability 1 to a_i and every other player j uses her
mixed strategy α_j.
Game Theory - A (Short) Introduction 124 9/12/2011
4.3 Mixed strategy Nash
equilibrium
This leads to the following analysis:
Let α* be a mixed strategy Nash equilibrium
Denote by E*_i player i's expected payoff in the equilibrium

Because α* is an equilibrium, player i's expected payoff, given α*_-i,
to each of her strategies (including all her pure strategies) is at most E*_i

But E*_i is a weighted average of player i's expected payoffs to the
pure strategies to which α*_i assigns positive probability

Thus, player i's expected payoffs to these pure strategies are all
equal to E*_i (if any were smaller, the weighted average would be
smaller than E*_i).
Game Theory - A (Short) Introduction 125 9/12/2011
4.3 Mixed strategy Nash
equilibrium
We conclude that:
the expected payoff to each action to which α*_i assigns positive
probability is E*_i
the expected payoff to every other action is at most E*_i

Proposition 116.2
A mixed strategy profile α* in a strategic game with vNM
preferences in which each player has finitely many actions is a
mixed strategy Nash equilibrium if and only if, for each player i,
the expected payoff, given α*_-i, to every action to which α*_i assigns
positive probability is the same
the expected payoff, given α*_-i, to every action to which α*_i assigns
zero probability is at most the expected payoff to any action to
which α*_i assigns positive probability
Each player's expected payoff in an equilibrium is her expected
payoff to any of her actions that she uses with positive
probability.
Game Theory - A (Short) Introduction 126 9/12/2011
4.3 Mixed strategy Nash
equilibrium
This proposition allows us to check whether a mixed strategy
profile is an equilibrium.
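
A minimal sketch of such a check for two-player finite games (Python; the function name is_mixed_nash and the payoff encoding are illustrative assumptions):

    # Check the two conditions of Proposition 116.2 for a candidate mixed
    # strategy pair. U1[i][j], U2[i][j] are Bernoulli payoffs when player 1
    # plays her i-th action and player 2 plays her j-th action.
    def is_mixed_nash(U1, U2, alpha1, alpha2, tol=1e-9):
        def check(payoff_rows, own, other):
            # expected payoff of each of "own" player's actions vs. the opponent's mix
            evals = [sum(q * row[j] for j, q in enumerate(other)) for row in payoff_rows]
            support = [e for p, e in zip(own, evals) if p > tol]
            same_on_support = max(support) - min(support) <= tol
            support_is_best = max(evals) - max(support) <= tol
            return same_on_support and support_is_best
        # player 2's payoffs indexed by her own action: transpose U2
        U2_cols = [[U2[i][j] for i in range(len(U2))] for j in range(len(U2[0]))]
        return check(U1, alpha1, alpha2) and check(U2_cols, alpha2, alpha1)

    # Matching Pennies (Figure 111.1): ((1/2,1/2),(1/2,1/2)) passes, a pure row does not.
    U1 = [[1, -1], [-1, 1]]
    U2 = [[-1, 1], [1, -1]]
    print(is_mixed_nash(U1, U2, [0.5, 0.5], [0.5, 0.5]))   # True
    print(is_mixed_nash(U1, U2, [1.0, 0.0], [0.5, 0.5]))   # False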

Example 117.1

              L (0)    C (1/3)   R (2/3)
   T (3/4)    ·,2       3,3       1,1
   M (0)      ·,·       0,·       2,·
   B (1/4)    ·,4       5,1       0,7
   (Figure 117.1)
Game Theory - A (Short) Introduction 127 9/12/2011
4.3 Mixed strategy Nash
equilibrium
For the game in Figure 117.1 (in which the dots indicate
irrelevant payoffs), the indicated pair of strategies ((3/4, 0, 1/4)
for player 1 and (0, 1/3, 2/3) for player 2) is a mixed strategy
Nash equilibrium.

To verify this claim, it suffices, by Proposition 116.2, to study
each player's expected payoffs to her three pure strategies. For
player 1, these payoffs are:

   T: (1/3)·3 + (2/3)·1 = 5/3
   M: (1/3)·0 + (2/3)·2 = 4/3
   B: (1/3)·5 + (2/3)·0 = 5/3
Game Theory - A (Short) Introduction 128 9/12/2011
4.3 Mixed strategy Nash
equilibrium
Player 1's mixed strategy assigns positive probability to T and B
and probability zero to M. So, the two conditions of Proposition
116.2 are satisfied for player 1.

The same verification is easily done for player 2. Note however
that, for player 2, the action L (which she uses with probability
0) has the same expected payoff as her other two actions. This
equality is consistent with Proposition 116.2 (the condition requires
"no greater than", not "less than").
Game Theory - A (Short) Introduction 129 9/12/2011
4.3 Mixed strategy Nash
equilibrium
Exercise 117.2 (Choosing numbers)
Players 1 and 2 each choose a positive integer up to K. If the
players choose the same number, then player 2 pays $1 to
player 1; otherwise no payment is made. Each players
preferences are represented by her expected monetary payoff.
Show that the game has a mixed strategy Nash equilibrium in
which each player chooses each positive integer up to K with
probability 1/K
Show that the game has no other mixed strategy Nash equilibria
(Deduce from the fact that player 1 assigns positive probability to
some action k that player 2 must do so; then look at the implied
restriction on player 1s equilibrium strategy)
Game Theory - A (Short) Introduction 130 9/12/2011
4.3 Mixed strategy Nash
equilibrium
Note finally that
an implication of Proposition 116.2 is that a nondegenerate mixed
strategy equilibrium (a mixed strategy equilibrium that is not also a
pure strategy equilibrium) is never a strict Nash equilibrium: every
player whose mixed strategy assigns a positive probability to more
than one action is indifferent between her equilibrium mixed
strategy and every action to which this mixed strategy assigns
positive probability.
The theory of mixed Nash equilibrium does not state that players
consciously choose their strategies at random given the equilibrium
probabilities. Rather, the conditions for equilibrium are designed to
ensure that it is consistent with a steady state. The question of how
a steady state may come about remains to be studied at this stage.


Game Theory - A (Short) Introduction 131 9/12/2011
4.3 Mixed strategy Nash
equilibrium
4.3.5 Existence of equilibrium in finite games

Proposition 119.1 (Existence of mixed strategy Nash
equilibrium in finite games)
Every strategic game with vNM preferences in which each player
has finitely many actions has a mixed strategy Nash equilibrium.

This proposition does not help to find the equilibrium but it is a
useful fact.
Note also that:
the finiteness of the number of actions is a sufficient condition for
the existence of an equilibrium, not a necessary one.
a player's strategy in a mixed strategy Nash equilibrium may
assign probability 1 to a single action.
Game Theory - A (Short) Introduction 132 9/12/2011
4.4 Dominated actions
Definition 120.1 (Strict domination)
In a strategic game with vNM preferences, player i's mixed
strategy α_i strictly dominates her action a'_i if

   U_i(α_i, a_-i) > u_i(a'_i, a_-i) for every list a_-i of the other players' actions,

where u_i is a Bernoulli payoff function and U_i(α_i, a_-i) is player i's
expected payoff under u_i when she uses the mixed strategy α_i
and the actions chosen by the other players are given by a_-i.
Game Theory - A (Short) Introduction 133 9/12/2011
4.4 Dominated actions
An action not strictly dominated by any pure strategy may be
strictly dominated by a mixed strategy (see Figure 120.1, which
shows only player 1's payoffs):

         L    R
   T     1    1
   M     4    0
   B     0    3
   (Figure 120.1)

The action T of player 1 is not strictly (or weakly) dominated
by M or B, but it is strictly dominated by the mixed strategy
that assigns probability ½ to M and probability ½ to B.
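As a direct check of Definition 120.1 with the payoffs of Figure 120.1: against L the mixed strategy yields ½·4 + ½·0 = 2 > 1 = u_1(T,L), and against R it yields ½·0 + ½·3 = 3/2 > 1 = u_1(T,R), so the strict inequality holds for every action of the other player.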
Game Theory - A (Short) Introduction 134 9/12/2011
4.4 Dominated actions
Exercise 120.2 (Strictly dominated mixed strategy)
In Figure 120.1, the mixed strategy that assigns probability ½ to
M and ½ to B is not the only mixed strategy that strictly
dominates T. Find all the mixed strategies that do so.

Exercise 120.3 (Strict domination for mixed strategies)
Determine whether each of the following statements is true or
false:
A mixed strategy that assigns positive probability to a strictly
dominated action is strictly dominated.
A mixed strategy that assigns positive probability only to actions
that are not strictly dominated is not strictly dominated.
Game Theory - A (Short) Introduction 135 9/12/2011
4.4 Dominated actions
A strictly dominated action is not a best response to any
collection of mixed strategies of the other players:
Suppose that player i's action a_i is strictly dominated by her mixed
strategy α_i.
Player i's expected payoff U_i(α_i, α_-i) when she uses the mixed strategy
α_i and the other players use the mixed strategies α_-i is a weighted
average of her payoffs U_i(α_i, a_-i) as a_-i varies over all the collections of
actions for the other players, with the weight on each a_-i equal to the
probability with which it occurs when the other players' mixed
strategies are α_-i.
Player i's expected payoff when she uses the action a_i and the other
players use the mixed strategies α_-i is a similar weighted average; the
weights are the same, but the terms take the form u_i(a_i, a_-i) rather than
U_i(α_i, a_-i).
The fact that a_i is strictly dominated by α_i means that U_i(α_i, a_-i) >
u_i(a_i, a_-i) for every collection a_-i of the other players' actions.
Hence player i's expected payoff when she uses the mixed strategy α_i
exceeds her expected payoff when she uses the action a_i, given α_-i.

Game Theory - A (Short) Introduction 136 9/12/2011
4.4 Dominated actions
Consequently, a strictly dominated action is not used with
positive probability in any mixed strategy Nash equilibrium.

Definition 121.1 (Weak domination)
In a strategic game with vNM preferences, player i's mixed
strategy α_i weakly dominates her action a'_i if

   U_i(α_i, a_-i) ≥ u_i(a'_i, a_-i) for every list a_-i of the other players' actions

and

   U_i(α_i, a_-i) > u_i(a'_i, a_-i) for some list a_-i of the other players' actions,

where u_i is a Bernoulli payoff function and U_i(α_i, a_-i) is player i's
expected payoff under u_i when she uses the mixed strategy α_i
and the actions chosen by the other players are given by a_-i.
Game Theory - A (Short) Introduction 137 9/12/2011
4.4 Dominated actions
Note that, as a weakly dominated action may be used in a Nash
equilibrium, a weakly dominated action may be used with
positive probability in a mixed strategy equilibrium. We therefore
cannot eliminate weakly dominated actions from
consideration when finding mixed strategy equilibria.

However:

Proposition 122.1 (Existence of mixed strategy Nash
equilibrium with no weakly dominated strategies in finite games)
Every strategic game with vNM preferences in which each
player has finitely many actions has a mixed strategy Nash
equilibrium in which no players strategy is weakly dominated.
Game Theory - A (Short) Introduction 138 9/12/2011
4.5 Pure equilibria when
randomization is allowed
Equilibria when the players are not allowed to randomize
remain equilibria when they are allowed to randomize:

Proposition 122.2 (Pure strategy equilibria survive when
randomization is allowed)
Let a* be a Nash equilibrium of G and, for each player i, let α*_i
be the mixed strategy of player i that assigns probability one to
the action a*_i. Then α* is a mixed strategy Nash equilibrium of
G'.




Game Theory - A (Short) Introduction 139 9/12/2011
4.5 Pure equilibria when
randomization is allowed
Any pure equilibria that exist when the players are allowed to
randomize are equilibria when they are not allowed to
randomize:

Proposition 123.1 (Pure strategy equilibria survive when
randomization is prohibited)
Let α* be a mixed strategy Nash equilibrium of G' in which the
mixed strategy of each player i assigns probability one to the
single action a*_i. Then a* is a Nash equilibrium of G.
Game Theory - A (Short) Introduction 140 9/12/2011
4.5 Pure equilibria when
randomization is allowed
To establish these two propositions, let N be a set of players
and let A_i, for each player i, be a set of actions.

Consider the following two games:
G: the strategic game with ordinal preferences in which the set of
players is N, the set of actions of each player i is A_i, and the
preferences of each player i are represented by the payoff function
u_i
G': the strategic game with vNM preferences in which the set of
players is N, the set of actions of each player i is A_i, and the
preferences of each player i are represented by the expected
value of u_i
Game Theory - A (Short) Introduction 141 9/12/2011
4.5 Pure equilibria when
randomization is allowed
Proposition 122.2:
Let a* be a Nash equilibrium of G and, for each player i, let α*_i
be the mixed strategy that assigns probability 1 to a*_i. Since a* is a Nash
equilibrium of G, we know that in G no player i has an action
that yields her a payoff higher than does a*_i when all other
players adhere to a*_-i. Thus α* satisfies the two conditions in
Proposition 116.2, so it is a mixed strategy equilibrium of G'.
Proposition 123.1:
Let α* be a mixed strategy Nash equilibrium of G' in which every
player's mixed strategy is pure. For each player i, denote by a*_i the
action to which α*_i assigns probability one. Then no mixed
strategy of player i yields her a payoff higher than does α*_i. Thus
a* is a Nash equilibrium of G.
Game Theory - A (Short) Introduction 142 9/12/2011
4.7 Equilibrium in a single
population
Definition 129.1 (Symmetric two-player strategic game with
vNM preferences)
A two-player strategic game with vNM preferences is symmetric
if the players' sets of actions are the same and the players'
preferences are represented by the expected values of payoff
functions u_1 and u_2 for which u_1(a_1, a_2) = u_2(a_2, a_1) for every
action pair (a_1, a_2).

Definition 129.2 (Symmetric mixed strategy Nash equilibrium)
A profile α* of mixed strategies in a strategic game with vNM
preferences in which each player has the same set of actions is
a symmetric mixed strategy Nash equilibrium if it is a mixed
strategy Nash equilibrium and α*_i is the same for every player i.
Game Theory - A (Short) Introduction 143 9/12/2011
4.7 Equilibrium in a single
population
Game of approaching pedestrians (Figure 129.1):

              Left     Right
   Left       1,1      0,0
   Right      0,0      1,1
   (Figure 129.1)

This game has two deterministic steady states ((Left,Left) and
(Right,Right)), corresponding to the two symmetric Nash equilibria in
pure strategies.
The game also has a symmetric mixed strategy Nash equilibrium, in
which each player assigns probability ½ to Left and probability ½ to
Right.
This equilibrium corresponds to a steady state in which half of all
encounters result in collisions!
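To check the mixed equilibrium with Proposition 116.2: against an opponent who plays Left with probability ½, both Left and Right yield expected payoff ½, so every mixed strategy is a best response, and the symmetric ½-½ profile is indeed an equilibrium. In each encounter the players then miscoordinate with probability ½·½ + ½·½ = ½, which is the collision rate mentioned above.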
Game Theory - A (Short) Introduction 144 9/12/2011
4.7 Equilibrium in a single
population
Exercise 130.3 (Bargaining)
Pairs of players from a single population bargain over the division of a
pie of size 10. The members of a pair simultaneously make demands.
The possible demands are nonnegative even integers up to 10.
If the demands sum to 10, then each player receives her demand. If the
demands sum to less than 10, then each player receives her demand
plus half of the pie that remains after both demands have been
satisfied. If the demands sum to more than 10, then neither player
receives any payoff.

Find all the symmetric mixed strategy Nash equilibria in which each
player assigns positive probability to at most two demands (many
situations in which each player assigns positive probability to two
actions says a and a can be ruled out as equilibria because when
one player uses such strategy, some action yields the other player a
payoff higher than does one or both of the actions a and a).
Game Theory - A (Short) Introduction 145 9/12/2011
4.9 The formation of players
beliefs
In a Nash equilibrium, each player chooses a strategy that
maximizes her expected payoff, knowing the other players
strategies.
The idea underlying the previous analysis is that the players
have learned each others strategies from their experience
playing the game.
The idealized situation is the following:
for each player in the game, there is a large population of
individuals who may take the role of that player
in any play of the game, one participant is drawn randomly from
each population
In this situation, a new individual who joins a population can
learn the other players strategies by observing their actions
over many plays of the game.
Game Theory - A (Short) Introduction 146 9/12/2011
4.9 The formation of players
beliefs
As long as the number of new players is low, existing players
encounters with neophytes (who may use nonequilibrium
strategies) will be sufficiently rare that their beliefs about the
steady state will not be disturbed. So, a new players problem is
simply to learn the other players actions.

But what might happen if new players simultaneously join more
than one population in sufficient numbers, so that the
probability that they encounter one another is no longer small? In
particular, can we expect a steady state to be reached if no one
has experience?
Game Theory - A (Short) Introduction 147 9/12/2011
4.9 The formation of players
beliefs
4.9.1 Eliminating dominated actions
In some games, players may reasonably be expected to choose
their Nash equilibrium actions from an introspective analysis of
the game:
At the extreme (e.g., the Prisoner's Dilemma), each player's best action
is independent of the other players' actions.
In a less extreme case, some player's best action may depend on
the other players' actions, but the actions the other players will
choose may be clear because each of these players has an action
that strictly dominates all others.
Game Theory - A (Short) Introduction 148 9/12/2011
4.9 The formation of players
beliefs
e.g.: in the game in Figure 135.1, player 2's action R strictly
dominates L. So, no matter what player 2 thinks player 1 will be
playing, she should choose R. Consequently, player 1, who can
deduce by this argument that player 2 will choose R, may
reason that she should choose B, even without any experience
of the game.

(Figure 135.1: a two-player game in which R strictly dominates L for
player 2, and B is player 1's unique best response to R.)
Game Theory - A (Short) Introduction 149 9/12/2011
4.9 The formation of players
beliefs
4.9.2 Learning
Another approach to the question of how a steady state might be
reached assumes that players learn:
each player starts with an unexplained prior belief about the other players'
actions
she changes these beliefs in response to information she receives
Two theories are:
Best response dynamics: a simple theory assumes that in each
period after the first, each player believes that the other players will
choose the actions they chose in the previous period:
in the first period, each player chooses a best response to an arbitrary
deterministic belief about the other players' actions
in every subsequent period, each player chooses a best response to the
other players' actions in the previous period
An action profile that remains the same from period to period (a steady
state) is then a pure Nash equilibrium of the game. The two questions
are then:
does the process converge to a steady state?
how long does it take to converge?
Game Theory - A (Short) Introduction 150 9/12/2011
4.9 The formation of players
beliefs
e.g.: in the BoS game (Example 18.2), for some initial beliefs the
process does not converge:
if player 1 initially believes that player 2 will choose
Stravinsky and player 2 initially believes that player 1 will
choose Bach, then the players' choices will
subsequently alternate indefinitely between the
action pairs (Bach, Stravinsky) and
(Stravinsky, Bach).
Fictitious play: under the best response dynamics, a player's
belief does not admit the possibility that her opponents' actions are
realizations of mixed strategies. Under fictitious play,
players consider the actions in all previous periods when forming a
belief about their opponents' strategies. They treat these actions as
realizations of mixed strategies:
each player begins with an arbitrary probabilistic belief about
the other players' actions
then, in each period, she adopts the belief that her opponent is
using a mixed strategy in which the probability of each action is
proportional to the frequency with which her opponent chose
that action in the previous periods.
Game Theory - A (Short) Introduction 151 9/12/2011
4.9 The formation of players
beliefs
Note that:
in any two-player game in which each player has two
actions (e.g., Matching Pennies), this process
converges to a mixed strategy Nash equilibrium from any
initial beliefs;
for other games, there are initial beliefs for which the
process does not converge.
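
A minimal fictitious-play sketch for Matching Pennies (Python; payoffs as in Figure 111.1, initial beliefs chosen arbitrarily): the empirical frequencies of both players' actions approach (½, ½), the mixed strategy Nash equilibrium.

    # Actions: 0 = Head, 1 = Tail; u1[a1][a2] is player 1's payoff, player 2's is -u1.
    u1 = [[1, -1], [-1, 1]]
    u2 = [[-u1[i][j] for i in range(2)] for j in range(2)]   # indexed [a2][a1]

    def best_response(payoff_rows, opponent_counts):
        total = sum(opponent_counts)
        belief = [c / total for c in opponent_counts]        # empirical frequencies
        evals = [sum(b * row[j] for j, b in enumerate(belief)) for row in payoff_rows]
        return max(range(2), key=lambda a: evals[a])

    counts1 = [1, 0]   # arbitrary initial belief about player 1: Head once
    counts2 = [0, 1]   # arbitrary initial belief about player 2: Tail once
    for t in range(1000):
        a1 = best_response(u1, counts2)
        a2 = best_response(u2, counts1)
        counts1[a1] += 1
        counts2[a2] += 1

    print([c / sum(counts1) for c in counts1])   # approx. [0.5, 0.5]
    print([c / sum(counts2) for c in counts2])   # approx. [0.5, 0.5]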
Game Theory - A (Short) Introduction 152 9/12/2011
4.10 Extension: finding all
mixed strategy Nash equilibria
The following systematic method can be used to find all mixed strategy
Nash equilibria of a game:
For each player i, choose a subset S_i of her set A_i of actions.
Check whether there exists a mixed strategy profile α such that:
(i) the set of actions to which each strategy α_i assigns positive probability is S_i
(ii) α satisfies the conditions of Proposition 116.2.
Repeat the analysis for every collection of subsets of the players' sets
of actions (a brute-force version for the two-action case is sketched below).

The shortcoming of the method is that for games in which each player
has several actions, or in which there are several players, the
number of possibilities to examine is huge. In a two-player game in
which each player has three actions:
each player's set of actions has seven non-empty subsets (three
consisting of a single action, three consisting of two actions, and one
consisting of the entire set of actions)
so that there are 49 (7x7) possible collections of subsets to check.
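
A minimal brute-force sketch of this method for the two-action case of Example 138.1 below (Python; the function name mixed_2x2 is an illustrative assumption; interior supports are solved via the indifference conditions derived on the following slides):

    def mixed_2x2(U1, U2):
        found = []
        for i in (0, 1):                       # singleton supports: pure profiles
            for j in (0, 1):
                if U1[i][j] >= U1[1-i][j] and U2[i][j] >= U2[i][1-j]:
                    found.append((1.0 - i, 1.0 - j))       # (prob. of T, prob. of L)
        den_q = U1[0][0] - U1[0][1] - U1[1][0] + U1[1][1]
        den_p = U2[0][0] - U2[1][0] - U2[0][1] + U2[1][1]
        if den_q != 0 and den_p != 0:          # both supports of size two
            q = (U1[1][1] - U1[0][1]) / den_q  # makes player 1 indifferent
            p = (U2[1][1] - U2[1][0]) / den_p  # makes player 2 indifferent
            if 0 < p < 1 and 0 < q < 1:
                found.append((p, q))
        # a two-action support against a singleton requires a payoff tie; omitted here
        return found

    # Matching Pennies (Figure 111.1): the only equilibrium is p = q = 1/2
    print(mixed_2x2([[1, -1], [-1, 1]], [[-1, 1], [1, -1]]))   # [(0.5, 0.5)]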
Game Theory - A (Short) Introduction 153 9/12/2011
4.10 Extension: finding all
mixed strategy Nash equilibria
Example 138.1: Finding all mixed strategy equilibria of a two-
player game in which each player has two actions.
Denote the actions and payoffs as in Figure 139.1:

              L            R
      T    u11,v11      u12,v12
      B    u21,v21      u22,v22
   (Figure 139.1)

Each player's set of actions has three nonempty subsets:
two each consisting of a single action
one consisting of both actions.
Thus, there are nine (3x3) pairs of subsets of the players' action
sets.
Game Theory - A (Short) Introduction 154 9/12/2011
4.10 Extension: finding all
mixed strategy Nash equilibria
For each pair (S_1, S_2), we check whether there is a pair (α_1, α_2) of mixed
strategies such that each strategy α_i assigns positive probability
only to actions in S_i and the conditions in Proposition 116.2 are
satisfied:
checking the four pairs of subsets in which each player's
subset consists of a single action amounts to checking whether
any of the four pairs of actions is a pure strategy equilibrium.
consider the pair of subsets {T,B} for player 1 and {L} for player 2:
the second condition in Proposition 116.2 is automatically
satisfied for player 1 (she has no action to which she
assigns probability 0)
the first condition in Proposition 116.2 is automatically
satisfied for player 2 (she assigns positive probability
only to one action).
Game Theory - A (Short) Introduction 155 9/12/2011
4.10 Extension: finding all
mixed strategy Nash equilibria
Thus, for there to be a mixed strategy equilibrium in which
player 1's probability of using T is p, we need:
u11 = u21: player 1's payoffs to her two actions must be
equal
p v11 + (1-p) v21 ≥ p v12 + (1-p) v22: L must be at least as
good as R given player 1's mixed strategy.
A similar argument applies to the three other pairs of subsets
in which one player's subset consists of both her actions and
the other player's subset consists of a single action.
To check finally whether there is a mixed strategy equilibrium
in which the subsets are {T,B} for player 1 and {L,R} for player
2, we need to find a pair of mixed strategies that satisfies the
first condition of Proposition 116.2 (the second condition being
automatically satisfied, no action having probability 0). That is,
we need to find p and q such that:

   q u11 + (1-q) u12 = q u21 + (1-q) u22
   p v11 + (1-p) v21 = p v12 + (1-p) v22
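
Carrying out this last step (when the denominators below are nonzero), the two linear equations give

   q = (u22 - u12) / (u11 - u12 - u21 + u22)
   p = (v22 - v21) / (v11 - v12 - v21 + v22)

and this support pair yields a mixed strategy equilibrium only if both values lie strictly between 0 and 1.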
Game Theory - A (Short) Introduction 156 9/12/2011
4.10 Extension: finding all
mixed strategy Nash equilibria
Example 139.2: Find all mixed strategy equilibria of a variant of
BoS






Exercise 141.1: Find all mixed strategy equilibria of a two-
player game
              B        S        X
      B      4,2      0,0      0,1
      S      0,0      2,4      1,3
   (Figure 139.2)

              L        M        R
      T      2,2      0,3      1,3
      B      3,2      1,1      0,2
   (Figure 141.1)
Game Theory - A (Short) Introduction 157 9/12/2011
4.10 Extension: finding all
mixed strategy Nash equilibria
Exercise 142.1: Find the mixed strategy Nash equilibria of the
three-player game in Figure 142.1 (each player has two actions):

              A          B
      A     1,1,1      0,0,0
      B     0,0,0      0,0,0
   (Figure 142.1: payoff triples (u1, u2, u3))
Game Theory - A (Short) Introduction 158 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
Consider now the case of a continuum of actions: the principles involved
in finding mixed strategy equilibria are the same as for games
with finitely many actions, though the techniques are different.
In a game in which a player has a continuum of actions, a mixed
strategy of a player is determined by the probabilities it assigns to sets
of actions.
Proposition 116.2 becomes:
Proposition 142.2 (Characterization of mixed strategy Nash
equilibrium)
A mixed strategy profile α* in a strategic game with vNM preferences is
a mixed strategy Nash equilibrium if and only if, for each player i,
α*_i assigns probability zero to the set of actions a_i for which the action
profile (a_i, α*_-i) yields player i an expected payoff less than her expected
payoff to α*.
for no action a_i does the action profile (a_i, α*_-i) yield player i an expected
payoff greater than her expected payoff to α*.

Game Theory - A (Short) Introduction 159 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
Games with a continuum of actions can be very complex to
analyze. A significant class consists of games in which
each player's set of actions is a one-dimensional interval of
numbers:
Consider such a game with two players
Let player i's set of actions be an interval [a_i^-, a_i^+], for i = 1,2
Identify each player's mixed strategy with a cumulative probability
distribution over this interval: the mixed strategy of each player i is a
nondecreasing function F_i for which 0 ≤ F_i(a_i) ≤ 1 for every action a_i.
The number F_i(a_i) is the probability that player i's action is at most
a_i.
The form of a mixed strategy Nash equilibrium in such a game can
be very complex, but some such games have equilibria of
a particularly simple form, in which each player's equilibrium mixed
strategy assigns probability zero except in an interval.
Game Theory - A (Short) Introduction 160 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
The mixed strategies (F_1, F_2) satisfy the following conditions, for
i = 1,2:
There are numbers x_i and y_i such that player i's mixed strategy
F_i assigns probability zero except in the interval from x_i to y_i:
F_i(z) = 0 for z < x_i, and F_i(z) = 1 for z ≥ y_i.
Player i's expected payoff when her action is a_i and the other
player uses her mixed strategy F_j is

   = c_i   for x_i ≤ a_i ≤ y_i
   ≤ c_i   for a_i < x_i and for a_i > y_i

where c_i is a constant.


Game Theory - A (Short) Introduction 161 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
Example 143.1 (All-pay auction)
Two people submit sealed bids for an object worth $K to each of
them. Each person's bid may be any nonnegative number up to
$K. The winner is the person whose bid is higher. In the event
of a tie, each person receives half of the object (which she
values at $K/2). Each person pays her bid, regardless of
whether she wins, and has preferences represented by the
expected amount of money she receives.
Game Theory - A (Short) Introduction 162 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
This situation may be modeled by the following strategic game:
Players: the two bidders
Actions: each player's set of actions is the set of possible bids
(nonnegative numbers up to K)
Payoff functions: each player i's preferences are represented by
the expected value of the payoff function given by (j denoting the other player):

   u_i(a_1, a_2) =  -a_i         if a_i < a_j
                    K/2 - a_i    if a_i = a_j
                    K - a_i      if a_i > a_j

e.g.: a competition between two firms to develop a new product by
some deadline, where the firm that spends the most develops a
better product, which captures the entire market.
Game Theory - A (Short) Introduction 163 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
An all-pay auction has no pure strategy Nash equilibrium, by the
following argument:
No pair of actions (x,x) with x < K is a Nash equilibrium because
either player can increase her payoff by slightly increasing her bid
(K,K) is not a Nash equilibrium because either player can increase
her payoff from -K/2 to 0 by reducing her bid to 0
No pair of actions (a_1, a_2) with a_1 ≠ a_2 is a Nash equilibrium because
the player whose bid is higher can increase her payoff by reducing
her bid (and the player whose bid is lower can, if her bid is positive,
increase her payoff by reducing her bid to 0)


Game Theory - A (Short) Introduction 164 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
Consider the possibility that the game has a mixed strategy Nash
equilibrium. Denote by F_i player i's mixed strategy (a cumulative
distribution function over the interval of possible bids).
We look for an equilibrium in which neither mixed strategy
assigns positive probability to any single bid (there are infinitely
many possible bids, and for a continuous random variable
Prob(x = c) = 0 for every c).
In that case, F_i(a_i) is both the probability that player i bids at most a_i
and the probability that she bids less than a_i.
We restrict attention to strategy pairs (F_1, F_2) for which, for
i = 1,2, there are numbers x_i and y_i such that F_i assigns positive
probability only to the interval from x_i to y_i.
Game Theory - A (Short) Introduction 165 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
To investigate the possibility of such an equilibrium, consider
player 1's expected payoff when she uses the action a_1, given
player 2's mixed strategy F_2:
if a_1 < x_2, then a_1 is less than player 2's bid with probability one, so
player 1's payoff is -a_1
if a_1 > y_2, then a_1 exceeds player 2's bid with probability one, so
player 1's payoff is K - a_1
if x_2 ≤ a_1 ≤ y_2, then player 1's expected payoff is computed as follows:
with probability F_2(a_1), player 2's bid is less than a_1, in which case player
1's payoff is K - a_1
with probability 1 - F_2(a_1), player 2's bid exceeds a_1, in which case player
1's payoff is -a_1
by assumption, the probability that player 2's bid is exactly equal to a_1 is
zero
Player 1's expected payoff is therefore

   (K - a_1) F_2(a_1) + (-a_1)(1 - F_2(a_1)) = K F_2(a_1) - a_1
Game Theory - A (Short) Introduction 166 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
We need to find values of x_1 and y_1 and a strategy F_2 such that
player 1's expected payoff satisfies the condition of Proposition
142.2: it is a constant on the interval from x_1 to y_1 and less than this
constant outside this interval.

(Figure 144.1: player 1's expected payoff as a function of a_1 over her
action interval [a_1^-, a_1^+], equal to a constant c_1 for x_1 ≤ a_1 ≤ y_1
and lower outside this interval.)
Game Theory - A (Short) Introduction 167 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
The conditions are therefore:
K F_2(a_1) - a_1 = c_1 for x_1 ≤ a_1 ≤ y_1, for some constant c_1
F_2(x_2) = 0 and F_2(y_2) = 1
F_2 must be nondecreasing (it is a CDF)
and analogous conditions must hold for x_2, y_2, and F_1.

Solution: if x_1 = x_2 = 0, y_1 = y_2 = K, and F_1(z) = F_2(z) = z/K
for all z with 0 ≤ z ≤ K, these conditions are fulfilled. Each
player's expected payoff is then constant and equal to 0
for all her actions.

Thus, the game has a mixed strategy Nash equilibrium in which
each player randomizes uniformly over all her actions. Proving
that it is the only mixed strategy Nash equilibrium is more
complex.
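
A quick numerical check of this equilibrium (Python; K normalized to 1, the Monte Carlo draws are an illustrative device): under uniform bidding by player 2, every bid of player 1 earns an expected payoff of approximately 0, as the analytic expression K·F_2(a) - a = a - a = 0 predicts.

    import random

    K = 1.0
    def expected_payoff(a, n=200_000):
        total = 0.0
        for _ in range(n):
            b = random.uniform(0.0, K)          # player 2's bid, uniform on [0, K]
            total += (K - a) if a > b else -a   # win: K - a ; lose: -a (ties have prob. 0)
        return total / n

    for a in (0.1, 0.5, 0.9):
        print(a, round(expected_payoff(a), 3))   # all approximately 0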
Game Theory - A (Short) Introduction 168 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
4.12.1 Expected payoffs
Suppose that a decision-maker has preferences over a set of
deterministic outcomes and that each of her actions results in a
lottery (probability distribution) over these outcomes.
To determine the action she chooses, we need her preferences
over lotteries.
We cannot derive these preferences from her preferences over
deterministic outcomes. So, assume we are given preferences over
lotteries.
Under fairly weak assumptions, we can represent these
preferences by a payoff function: we can find a function, say U,
over lotteries (p_1, ..., p_K) such that U(p_1, ..., p_K) > U(p'_1, ..., p'_K)
if and only if the decision-maker prefers (p_1, ..., p_K) to (p'_1, ..., p'_K),
where p_k is the probability of the kth outcome.
Game Theory - A (Short) Introduction 169 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
In most cases, however, we need more structure to go further in the
analysis. The standard approach, developed by von Neumann and
Morgenstern (1944), is to impose an additional assumption (known as the
independence assumption) that allows us to conclude that the decision-
maker's preferences can be represented by an expected payoff function.
Under this assumption, there is a payoff function u over deterministic
outcomes such that the decision-maker's preference relation over lotteries
is represented by the function

   U(p_1, ..., p_K) = Σ_{k=1}^{K} p_k u(a_k)

where a_k is the kth outcome of the lottery; that is,

   Σ_{k=1}^{K} p_k u(a_k) > Σ_{k=1}^{K} p'_k u(a_k)

if and only if the decision-maker prefers the lottery (p_1, ..., p_K) to (p'_1, ..., p'_K).
Game Theory - A (Short) Introduction 170 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
This sort of payoff function (for which the decision-maker's
preferences are represented by the expected value of the
payoffs) is known as a Bernoulli payoff function.

e.g.: suppose that there are three possible deterministic
outcomes: the decision-maker may receive $0, $1 or $5 (and
naturally prefers $5 to $1 to $0). Suppose that she prefers the
lottery (1/2, 0, 1/2) to the lottery (0, 3/4, 1/4), where probabilities
are given for outcomes $0, $1 and $5. This preference is
consistent with preferences represented by the expected value
of a payoff function u for which u(0)=0, u(1)=1 and u(5)=4:

   (1/2)·0 + (1/2)·4 > (3/4)·1 + (1/4)·4   (i.e. 2 > 7/4)
Game Theory - A (Short) Introduction 171 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
The great advantage of a Bernoulli payoff function is that preferences are
completely specified by the payoff function: once we know u(a_k) for each
possible outcome a_k, we know the decision-maker's preferences among
all lotteries.
A Bernoulli payoff function must however not be confused with a payoff
function that represents the decision-maker's preferences over
deterministic outcomes:
if u is a Bernoulli payoff function, it certainly is a payoff function that
represents the decision-maker's preferences over deterministic
outcomes
however, the converse is not true.
e.g.: suppose a decision-maker prefers $5 to $1 to $0 and prefers the lottery
(1/2, 0, 1/2) to (0, 3/4, 1/4). Define u by u(0)=0, u(1)=3 and u(5)=4. u is
compatible with her preferences over deterministic outcomes. However, it is
not compatible with her preferences over lotteries:

   (1/2)·0 + (1/2)·4 < (3/4)·3 + (1/4)·4   (i.e. 2 < 13/4)
Game Theory - A (Short) Introduction 172 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
4.12.2 Equivalent Bernoulli payoff functions
Lemma 148.1 (Equivalence of Bernoulli payoff functions)
Suppose that there are at least three possible outcomes. The
expected values of the Bernoulli payoff functions u and v
represent the same preferences over lotteries if and only if there
exist numbers k and m (with m > 0) such that u(x) = k + m v(x) for
all x.
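
A minimal numerical illustration of the lemma (Python; the payoff values and the constants k, m are arbitrary assumptions):

    # An increasing affine transformation v = k + m*u (m > 0) ranks lotteries
    # exactly as u does.
    k, m = 5.0, 2.0
    u = [0.0, 1.0, 4.0]                      # Bernoulli payoffs of three outcomes
    v = [k + m * x for x in u]

    def expected(payoffs, lottery):
        return sum(p * x for p, x in zip(lottery, payoffs))

    P, Q = (0.5, 0.0, 0.5), (0.0, 0.75, 0.25)
    print(expected(u, P) > expected(u, Q))   # True
    print(expected(v, P) > expected(v, Q))   # True: same ranking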

Exercise 149.2 (Normalized Bernoulli payoff functions)
Suppose that a decision-maker's preferences can be
represented by the expected value of the Bernoulli payoff
function u. Find a Bernoulli payoff function whose expected
value represents the decision-maker's preferences and assigns
a payoff of 1 to the best outcome and a payoff of 0 to the worst
outcome.
Game Theory - A (Short) Introduction 173 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
4.12.3 Equivalent strategic games with vNM preferences

(Figure 150.1: three payoff tables over the same action pairs, with
Bernoulli payoff functions u_i, v_i and w_i respectively; player 1's
payoffs take the values 0, 1 and 2 in the left table and the corresponding
values 0, 1 and 3 in the right table.)

The three games of Figure 150.1 represent the same strategic game
with deterministic (ordinal) preferences.
Only the left and middle tables represent the same strategic game
with vNM preferences. The reason is that the payoff functions in the
middle table are increasing affine functions of the payoff functions in the left
table, whereas the payoff functions in the right table are not.
Game Theory - A (Short) Introduction 174 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
Denote by u_i, v_i, w_i the Bernoulli payoff functions of the three games.
Then v_1(a) = 2 u_1(a) and v_2(a) = -3 + u_2(a). But w_1 is not an affine
function of u_1: there are no constants α and β such that w_1(a) = α +
β u_1(a), because the system

   0 = α + β·0
   1 = α + β·1
   3 = α + β·2

has no solution.
Game Theory - A (Short) Introduction 175 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
Exercise 150.1 (Games equivalent to the Prisoner's Dilemma)
Which of the two tables on the right in Figure 150.2 represents the same
strategic game with vNM preferences as the Prisoner's
Dilemma specified in the left table?

              C       D                  C       D                  C       D
      C      2,2     0,3          C     3,3     0,4          C     6,0     0,2
      D      3,0     1,1          D     4,0     2,2          D     9,-4    3,-2
   (Figure 150.2)
9. Bayesian Games
Game Theory - A (Short) Introduction 177 9/12/2011
Framework
An assumption underlying the notion of Nash equilibrium is that
each player holds the correct belief about the other players'
actions. To do so, a player must know the other players'
preferences.
However, in many situations players are not perfectly informed
about their opponents' characteristics (e.g.: firms may not know
each other's cost functions).
In this chapter, we generalize the notion of strategic game to
allow the analysis of situations in which each player is
imperfectly informed about an aspect of her environment that
is relevant to her choice of action.
Game Theory - A (Short) Introduction 178 9/12/2011
9.1 Motivational examples
We start with one example to illustrate the main ideas of Bayesian
games.
Example 273.1 (Variant of BoS with imperfect information)
Consider a variant of BoS in which player 1 is unsure whether
player 2 prefers to go out with her or prefers to avoid her,
whereas player 2 (as before) knows player 1's preferences.

   2 wishes to meet 1 (prob. ½)        2 wishes to avoid 1 (prob. ½)
              B       S                           B       S
      B      2,1     0,0                  B      2,0     0,2
      S      0,0     1,2                  S      0,1     1,0
   (Figure 274.1)
Game Theory - A (Short) Introduction 179 9/12/2011
9.1 Motivational examples
Specifically, suppose player 1 thinks that with probability ½ player 2
wants to go out with her, and with probability ½ player 2 wants to
avoid her (see Figure 274.1).
Because probabilities are involved, an analysis of the situation
requires us to know the players' preferences over lotteries.
We can represent the situation as having two states. In state 1,
the Bernoulli payoffs are given in the left table. In state 2, the
Bernoulli payoffs are given in the right table. Player 1 assigns
probability ½ to each state.
The notion of Nash equilibrium must be generalized to this new
setting:
from player 1's point of view, player 2 has two possible types
(one whose preferences are given by the left table of Figure
274.1 and the other, by the right table).
Game Theory - A (Short) Introduction 180 9/12/2011
9.1 Motivational examples
Player 1 does not know player 2's type. So, to choose an action
rationally, she needs to form a belief about the action of each
type of player 2.
Given these beliefs and her belief about the likelihood of each
type, she can calculate her expected payoff to each of her
actions.
For example, if player 1 thinks
that type 1 of player 2 will choose B and type 2 of player 2 will
choose S, then she thinks that choosing B will yield her a payoff of 2 with
probability ½ and of 0 with probability ½. So, in this case, her
expected payoff to B is ½·2 + ½·0 = 1. Similar calculations
lead to Figure 275.1 (player 1's expected payoffs; a pair such as (B,S)
means that type 1 of player 2 chooses B and type 2 chooses S):

                          (B,B)   (B,S)   (S,B)   (S,S)
   player 1 chooses B       2       1       1       0
   player 1 chooses S       0       ½       ½       1
   (Figure 275.1)
Game Theory - A (Short) Introduction 181 9/12/2011
9.1 Motivational examples
For this situation, we define a pure strategy Nash equilibrium to
be a triple of actions (one for player 1 and one for each type of
player 2) with the property that:
the action of player 1 is optimal, given the actions of the two
types of player 2 (and player 1s belief about the state)
the action of each type of player 2 is optimal, given the action
of player 1

Note that in a Nash equilibrium:
player 1's action is a best response in Figure 275.1 to the pair
of actions of the two types of player 2
the action of the type of player 2 who wishes to meet player 1
is a best response in the left table of Figure 274.1 to the action of
player 1
the action of the type of player 2 who wishes to avoid player 1
is a best response in the right table of Figure 274.1 to the
action of player 1
Game Theory - A (Short) Introduction 182 9/12/2011
9.1 Motivational examples
Why should player 2, who knows her own type, have to plan
what to do in both cases?
She does not!
However, as analysts, we need to consider what she would do
in both cases. The reason is that to determine her best action,
player 1, who does not know player 2's type, needs to form a belief
about the action each type of player 2 would take, and we wish to
impose the equilibrium condition that these beliefs are correct.

(B, (B, S)) is a Nash equilibrium, where B is the action of player 1
and (B, S) are the actions of type 1 and type 2 of player 2 respectively.
Game Theory - A (Short) Introduction 183 9/12/2011
9.1 Motivational examples
Proof:
given that the actions of the two types of player 2 are (B,S), player
1's action B is optimal (see Figure 275.1)
given that player 1 chooses B, B is optimal for player 2 of type 1 and
S is optimal for player 2 of type 2 (see Figure 274.1)

We interpret the equilibrium as follows:
type 1 of player 2 chooses B and type 2 of player 2 chooses S,
anticipating that player 1 will choose B
player 1, who does not know whether player 2 is of type 1 or of type 2,
believes that if player 2 is of type 1 she will choose B, and if player
2 is of type 2 she will choose S.
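
A minimal sketch verifying this equilibrium (Python; payoffs encoded from Figure 274.1, the action indices 0 = B and 1 = S are an encoding choice):

    # u1[state][a1][a2], u2[state][a1][a2]; prior assigns probability 1/2 to each state.
    u1 = {"meet": [[2, 0], [0, 1]], "avoid": [[2, 0], [0, 1]]}
    u2 = {"meet": [[1, 0], [0, 2]], "avoid": [[0, 2], [1, 0]]}
    prior = {"meet": 0.5, "avoid": 0.5}

    def exp_payoff_1(a1, a_meet, a_avoid):
        return (prior["meet"] * u1["meet"][a1][a_meet]
                + prior["avoid"] * u1["avoid"][a1][a_avoid])

    B, S = 0, 1
    a1, a_meet, a_avoid = B, B, S         # the candidate equilibrium (B, (B, S))

    ok1 = all(exp_payoff_1(a1, a_meet, a_avoid) >= exp_payoff_1(d, a_meet, a_avoid) for d in (B, S))
    ok_meet = all(u2["meet"][a1][a_meet] >= u2["meet"][a1][d] for d in (B, S))
    ok_avoid = all(u2["avoid"][a1][a_avoid] >= u2["avoid"][a1][d] for d in (B, S))
    print(ok1 and ok_meet and ok_avoid)   # True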
Game Theory - A (Short) Introduction 184 9/12/2011
9.1 Motivational examples
We can interpret the actions of the two types of player 2 to
reflect player 2s intentions in the hypothetical situation before
she knows the state. This corresponds to the following situation:
initially, player 2 does not know the state; she will be informed by a
signal that depends on the state;
before receiving this signal, she plans an action for each possible
state;
the same story is valid for player 1 but player 1 will receive an
uninformative signal (same signal in each state)

Note that in such a setup, a Nash equilibrium is a list of
actions, one for each type of each player, such that the action
of each type of each player is a best response to the
actions of all the types of the other player, given that player's
beliefs about the state after she observes her signal.
Game Theory - A (Short) Introduction 185 9/12/2011
9.1 Motivational examples
Exercise 276.1 (Equilibria of a variant of BoS with imperfect
information)
(i) Show that there is no pure strategy Nash equilibrium of this game in
which player 1 chooses S.
(ii) Find the mixed strategy Nash equilibria of the game (first
check whether there is an equilibrium in which both types of
player 2 use pure strategies; then look for equilibria in which
one or both of these types randomize).
Game Theory - A (Short) Introduction 186 9/12/2011
9.2 General definitions
9.2.1 Bayesian games
A strategic game with imperfect information is called a Bayesian
game.
A key component in the specification of the imperfect information is
the set of states: each state is a complete description of one
collection of the players' relevant characteristics (preferences,
information, ...). For every collection of characteristics that some
player believes to be possible, there must be a state.
At the start of the game a state is realized. The players do not
observe this state. Rather, each player receives a signal that may
give her some information about the state. We denote the signal
player i receives in state ω by τ_i(ω). The function τ_i(·) is called
player i's signal function. Note that this is a deterministic function:
for each state, a given signal is received.
Game Theory - A (Short) Introduction 187 9/12/2011
9.2 General definitions
A state that generates a given signal t_i is said to be consistent
with the signal t_i.
The size of the set of states consistent with each of player i's signals
reflects the quality of player i's information. The two extreme cases
are:
if τ_i(ω) is different for each value of ω, then player i knows,
given her signal, the state that has occurred: she is perfectly
informed about all the players' relevant characteristics.
if τ_i(ω) is the same for all states, then player i's signal conveys
no information: she is perfectly uninformed.
We refer to player i in the event that she receives the signal t_i as
type t_i of player i. Each type of each player holds a belief about the
likelihood of the states consistent with her signal (e.g.: if t_i = τ_i(ω_1) =
τ_i(ω_2), then type t_i of player i assigns probabilities to ω_1 and ω_2).

Game Theory - A (Short) Introduction 188 9/12/2011
9.2 General definitions
Each player may care about the actions chosen by the other
players and about the state. We therefore need to specify her
preferences regarding probability distributions over pairs (a, ω),
consisting of an action profile a and a state ω.
We assume that each player's preferences over such probability
distributions are represented by the expected value of a Bernoulli
payoff function. We therefore specify player i's preferences by giving a
Bernoulli payoff function u_i over pairs (a, ω).
Game Theory - A (Short) Introduction 189 9/12/2011
9.2 General definitions
Definition 279.1 (Bayesian game)
A Bayesian game consists of
a set of players
a set of states
and, for each player,
a set of actions
a set of signals that she may receive and a signal function that associates a signal with each state
for each signal that she may receive, a belief about the states consistent with the signal (a probability distribution over the set of states with which the signal is associated)
a Bernoulli payoff function over pairs (a, ω), where a is an action profile and ω is a state, the expected value of which represents the player's preferences.
Game Theory - A (Short) Introduction 190 9/12/2011
9.2 General definitions
Note that the set of actions of each player is independent of the
state: each player may care about the state, but the set of
actions available to her is the same in every state.

Application to Example 273.1 (a structural sketch in code follows):
players: the pair of people
states: {meet, avoid}
actions: for each player, {B, S}
signals: player 1 may receive a single signal, say z; her signal function τ_1 satisfies τ_1(meet) = τ_1(avoid) = z. Player 2 receives one of two signals, m and v; her signal function τ_2 satisfies τ_2(meet) = m and τ_2(avoid) = v.
beliefs: player 1 assigns probability 1/2 to each state after receiving the signal z. Player 2 assigns probability 1 to the state meet after receiving the signal m, and probability 1 to the state avoid after receiving the signal v.
payoffs: the payoffs u_i(a, meet) of each player i for all possible action pairs are given in the left panel of Figure 274.1 (and u_i(a, avoid) in the right panel).
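To make the components of Definition 279.1 concrete, here is a minimal sketch of mine (Python, not part of the original slides) encoding the structure of this example; the payoff numbers of Figure 274.1 are not reproduced in these notes, so they are left as a placeholder.

# Structural sketch of the Bayesian game of Example 273.1.
players = [1, 2]
states = ["meet", "avoid"]
actions = {1: ["B", "S"], 2: ["B", "S"]}

# Signal functions: player 1 gets the uninformative signal z in both states,
# player 2 learns the state (signal m in state meet, v in state avoid).
signal = {
    1: {"meet": "z", "avoid": "z"},
    2: {"meet": "m", "avoid": "v"},
}

# Beliefs: a probability distribution over states for each (player, signal) pair.
beliefs = {
    (1, "z"): {"meet": 0.5, "avoid": 0.5},
    (2, "m"): {"meet": 1.0, "avoid": 0.0},
    (2, "v"): {"meet": 0.0, "avoid": 1.0},
}

# Bernoulli payoffs u_i(a, state): to be filled in from Figure 274.1.
payoffs = {}  # e.g. payoffs[(1, ("B", "B"), "meet")] = ...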
Game Theory - A (Short) Introduction 191 9/12/2011
9.2 General definitions
9.2.2 Nash equilibrium
In a Bayesian game, each player chooses a collection of actions:
one for each signal she may receive (each type of each player
chooses an action).
In a Nash equilibrium of such a game, the action chosen by each
type of each player is optimal, given the actions chosen by every
type of every other player.

We define a Nash equilibrium of a Bayesian game to be a Nash equilibrium of a strategic game in which each player is one of the types of one of the players in the Bayesian game.
Game Theory - A (Short) Introduction 192 9/12/2011
9.2 General definitions
Notation:
Pr(ω | t_i): probability assigned by the belief of type t_i of player i to state ω.
a(j, t_j): action taken by type t_j of player j.
τ_j(ω): player j's signal in state ω. Her action in this state is a(j, τ_j(ω)), which we denote â_j(ω).
With this notation, the expected payoff of type t_i of player i when she chooses action a_i is:

Σ_{ω ∈ Ω} Pr(ω | t_i) u_i((a_i, â_{-i}(ω)), ω)

where Ω is the set of states and (a_i, â_{-i}(ω)) is the action profile in which player i chooses the action a_i and every other player j chooses â_j(ω).
Game Theory - A (Short) Introduction 193 9/12/2011
9.2 General definitions
Definition 281.1 (Nash equilibrium of Bayesian game)
A Nash equilibrium of a Bayesian game is a Nash equilibrium of the strategic game (with vNM preferences) defined as follows:
players: the set of all pairs (i, t_i) in which i is a player in the Bayesian game and t_i is one of the signals that i may receive;
actions: the set of actions of each player (i, t_i) is the set of actions of player i in the Bayesian game;
preferences: the Bernoulli payoff function of each player (i, t_i) is given by

u_{(i,t_i)}(a) = Σ_{ω ∈ Ω} Pr(ω | t_i) u_i((a_i, â_{-i}(ω)), ω)
Game Theory - A (Short) Introduction 194 9/12/2011
9.2 General definitions
Exercise 282.3 (Adverse selection)
Firm A (the acquirer) is considering taking over firm T (the target). It does not know firm T's value; it believes that this value, when firm T is controlled by its own management, is at least $0 and at most $100, and assigns equal probability to each of the 101 dollar values in this range (uniform distribution). Firm T will be worth 50% more under firm A's management than it is under its own management. Suppose that firm A bids y to take over firm T, and firm T is worth x (under its own management). Then if T accepts A's offer, A's payoff is (3/2)x − y and T's payoff is y. If T rejects A's offer, A's payoff is 0 and T's payoff is x. Model this situation as a Bayesian game in which firm A chooses how much to offer and firm T decides the lowest offer to accept. Find the Nash equilibrium (equilibria?). Explain why the logic behind the equilibrium is called adverse selection.
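A quick numeric sketch of mine (not part of the exercise statement, and under the assumption that firm T accepts any offer y at least as large as its own-management value x) shows why no positive bid is profitable for firm A:

# Expected payoff to firm A of bidding y, assuming T accepts iff y >= x
# and x is uniform on {0, 1, ..., 100}.
def expected_payoff(y):
    values = range(0, 101)
    return sum(1.5 * x - y for x in values if x <= y) / len(values)

best = max(range(0, 101), key=expected_payoff)
print(best, expected_payoff(best))   # bidding 0 is optimal (payoff 0)
print(expected_payoff(50))           # every positive bid loses money on average

The intuition: the offers that get accepted are exactly those in which firm T's value is low, so the acquirer's pool of accepted deals is adversely selected.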
Game Theory - A (Short) Introduction 195 9/12/2011
9.3 Example concerning
information
9.3.1 More information may hurt
A decision-maker in a single-person decision problem cannot be
worse off if she has more information: if she wishes, she can ignore
the information. In a game, this is not true.
State ω_1 (probability 1/2):
          L        M        R
   T    1, 2ε    1, 0     1, 3ε
   B    2, 2     0, 0     0, 3

State ω_2 (probability 1/2):
          L        M        R
   T    1, 2ε    1, 3ε    1, 0
   B    2, 2     0, 3     0, 0

(Figure 283.1)
Game Theory - A (Short) Introduction 196 9/12/2011
9.3 Example concerning
information
Consider the two-player game in Figure 283.1, where ε satisfies 0 < ε < 1/2. In this game there are two states, and neither player knows the state.
Player 2's unique best response to each action of player 1 is L:
if player 1 chooses T: L yields 2ε, while M and R each yield 3ε/2;
if player 1 chooses B: L yields 2, while M and R each yield 3/2.
Player 1's unique best response to L is B.
Thus, (B, L) is the unique Nash equilibrium. Each player gets a payoff of 2. The game has no (nondegenerate) mixed strategy Nash equilibrium.
Game Theory - A (Short) Introduction 197 9/12/2011
9.3 Example concerning
information
Now suppose that player 2 is informed of the state: player 2's signal function satisfies τ_2(ω_1) ≠ τ_2(ω_2).
In this game, (T, (R, M)) is the unique Nash equilibrium (each type of player 2 has a strictly dominant action, to which T is player 1's unique best response).
In this game, player 2's payoff is 3ε (in each state). Since 3ε < 2, she is worse off when she knows the state!
To understand this result: R is good only in state ω_1 and M is good only in state ω_2, while L is a compromise. Knowing the state leads player 2 to choose either R or M, which induces player 1 to choose T. There is no steady state in which player 2 chooses L, to induce player 1 to choose B.
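A small numeric check of the two claims above, using the payoffs of Figure 283.1 as reconstructed here and an arbitrary ε = 0.25 (the code and the choice of ε are mine, for illustration only):

eps = 0.25  # any value with 0 < eps < 1/2

# Payoffs (player 1, player 2) in each state, from the tables above.
g = {
    "w1": {("T", "L"): (1, 2 * eps), ("T", "M"): (1, 0), ("T", "R"): (1, 3 * eps),
           ("B", "L"): (2, 2),       ("B", "M"): (0, 0), ("B", "R"): (0, 3)},
    "w2": {("T", "L"): (1, 2 * eps), ("T", "M"): (1, 3 * eps), ("T", "R"): (1, 0),
           ("B", "L"): (2, 2),       ("B", "M"): (0, 3),       ("B", "R"): (0, 0)},
}

# Uninformed player 2: expected payoff of each column against each row
# (states equally likely). L is the unique best response to both T and B.
for row in ("T", "B"):
    expected = {col: 0.5 * (g["w1"][(row, col)][1] + g["w2"][(row, col)][1])
                for col in ("L", "M", "R")}
    print(row, expected)

# Informed player 2: her best action and payoff in each state against T.
for state in ("w1", "w2"):
    best = max(("L", "M", "R"), key=lambda c: g[state][("T", c)][1])
    print(state, best, g[state][("T", best)][1])   # payoff 3*eps < 2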
Game Theory - A (Short) Introduction 198 9/12/2011
9.6 Illustration: auctions
9.6.1 Introduction
In Section 3.5, every bidder knows every other bidder's valuation of the object for sale. This is highly unrealistic!
Assume that a single object is for sale. Each bidder independently receives some information (a signal) about the value of the object to her:
if each bidder's signal is simply her valuation, we say that the bidders' valuations are private (e.g.: a work of art whose beauty interests the buyers);
if each bidder's valuation depends on the other bidders' signals as well as her own, we say that the valuations are common (e.g.: an oil tract containing unknown reserves on which each bidder has conducted a test).
We will consider models in which bids for a single object are submitted simultaneously (bids are sealed) and the participant who submits the highest bid obtains the object.
Game Theory - A (Short) Introduction 199 9/12/2011
9.6 Illustration: auctions
We will consider both first-price (the winner pays the price she bids) and second-price (the winner pays the highest of the remaining bids) auctions.
Note that the argument that the second-price rule corresponds to an open ascending auction (English auction) depends upon the bidders' valuations being private. In a common valuation setup, the open ascending format reveals information to the bidders that they do not have access to in a sealed-bid procedure.

9.6.2 Independent private values
Each bidder knows that all other bidders' valuations are at least v− (where v− ≥ 0) and at most v+. She believes that the probability that any given bidder's valuation is at most v is F(v), independently of all other bidders' valuations, where F is a continuous increasing function (CDF).
Game Theory - A (Short) Introduction 200 9/12/2011
9.6 Illustration: auctions
The preferences of a bidder whose valuation is v are represented by a Bernoulli payoff function that assigns 0 to the outcome in which she does not win the object and v − p to the outcome in which she wins the object and pays the price p (quasi-linear payoff function). This amounts to assuming that the bidder is risk neutral.
We assume that the expected payoff of a bidder whose bid is tied for first place is (v − p)/m, where m is the number of tied winning bids.
We denote by P(b) the price paid by the winner of the auction when the profile of bids is b:
for a first-price auction, P(b) is the winning bid (the largest b_i);
for a second-price auction, P(b) is the highest bid made by a bidder different from the winner.
Game Theory - A (Short) Introduction 201 9/12/2011
9.6 Illustration: auctions
The Bayesian game that models first- and second-price auctions with independent private valuations is therefore:
players: the set of bidders 1, ..., n
states: the set of all profiles (v_1, ..., v_n) of valuations, where v− ≤ v_i ≤ v+ for all i
actions: each player's set of actions is the set of possible bids (nonnegative numbers)
signals: the set of signals that each player may observe is the set of possible valuations (the signal function is τ_i(v_1, ..., v_n) = v_i)
beliefs: every type v_i of player i assigns probability F(v_1) F(v_2) ... F(v_{i-1}) F(v_{i+1}) ... F(v_n) to the event that the valuation of every other player j is at most v_j.
Game Theory - A (Short) Introduction 202 9/12/2011
9.6 Illustration: auctions
payoff functions: player i's Bernoulli payoff is

u_i(b, (v_1, ..., v_n)) =  (v_i − P(b))/m   if b_j ≤ b_i for all j, and b_j = b_i for m players j
                           0                if b_j > b_i for some j

Nash equilibrium in a second-price sealed-bid auction: in a second-price sealed-bid auction with imperfect information about valuations (as in the perfect information setup), a player's bid equal to her valuation weakly dominates all her other bids (a brute-force check in code follows):
consider some type v_i of some player i and let b_i be a bid not equal to v_i;
for all bids by all types of all the other players, the expected payoff of type v_i of player i is at least as high when she bids v_i as it is when she bids b_i, and for some bids by the various types of the other players, her expected payoff is greater when she bids v_i than it is when she bids b_i.
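A brute-force sketch of mine illustrating this dominance claim on a grid of possible opposing bids (a numeric illustration, not the formal proof asked for in Exercise 294.1 below):

import itertools

def second_price_payoff(v_i, b_i, other_bids):
    # Payoff of a bidder with valuation v_i bidding b_i; ties at the top split equally.
    top = max([b_i] + list(other_bids))
    if b_i < top:
        return 0.0                                    # not a winning bid
    m = 1 + sum(1 for b in other_bids if b == top)    # number of tied winning bids
    return (v_i - max(other_bids)) / m                # price = highest bid by another bidder

v_i, grid = 5, range(0, 11)
for b_i in grid:                                      # compare bidding b_i with bidding v_i
    never_better, sometimes_worse = True, False
    for others in itertools.product(grid, repeat=2):  # all bid pairs of two opponents
        diff = second_price_payoff(v_i, b_i, others) - second_price_payoff(v_i, v_i, others)
        never_better &= diff <= 1e-12
        sometimes_worse |= diff < -1e-12
    assert never_better                               # bidding v_i is never worse ...
    assert b_i == v_i or sometimes_worse              # ... and sometimes strictly better
print("on this grid, bidding the valuation weakly dominates every other bid")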
Game Theory - A (Short) Introduction 203 9/12/2011
9.6 Illustration: auctions
Exercise 294.1 (Weak domination in a second-price sealed-bid auction)
Show that for each type v_i of each player i in a second-price sealed-bid auction with imperfect information about valuations, the bid v_i weakly dominates all other bids.

We conclude that a second-price sealed-bid auction with imperfect information about valuations has a Nash equilibrium in which every type of every player bids her valuation.

Exercise 294.2 (Nash equilibria of a second-price sealed-bid
auction)
For every player i, find a Nash equilibrium of a second-price
sealed-bid auction in which player i wins.
Game Theory - A (Short) Introduction 204 9/12/2011
9.6 Illustration: auctions
Nash equilibrium in a first-price sealed-bid auction
In the case of perfect information, the bid v_i by type v_i of player i weakly dominates any bid greater than v_i, does not weakly dominate bids less than v_i, and is itself weakly dominated by any such lower bid.
So, the game under imperfect information may have a Nash equilibrium in which each bidder bids less than her valuation.
Take the case of two bidders, each player's valuation being distributed uniformly between 0 and 1 (this assumption means that the fraction of valuations less than v is exactly v, so that F(v) = v for all v with 0 ≤ v ≤ 1).
Denote by β_i(v) the bid of type v of player i.
In this case, the game has a (symmetric) Nash equilibrium in which the function β_i is the same for both players, with β_i(v) = ½v for all v (each type of each player bids exactly half her valuation).
Game Theory - A (Short) Introduction 205 9/12/2011
9.6 Illustration: auctions
Proof:
suppose that each type of bidder 2 bids in this way;
as far as player 1 is concerned, player 2's bids are then uniformly distributed between 0 and ½;
thus, if player 1 bids more than ½, she surely wins; if she bids b_1 ≤ ½, the probability that she wins is the probability that player 2's valuation is less than 2b_1, which is 2b_1;
consequently, her expected payoff as a function of her bid is:

2b_1(v_1 − b_1)   if 0 ≤ b_1 ≤ ½
v_1 − b_1         if b_1 > ½

(Figure 295.1: player 1's expected payoff as a function of her bid b_1, for a given valuation v_1.)
Game Theory - A (Short) Introduction 206 9/12/2011
9.6 Illustration: auctions
This function is maximized at b_1 = ½v_1 (this can easily be seen graphically in Figure 295.1, or established mathematically).
Both players are identical, so player 2 also optimally bids half her valuation, conditional on player 1 bidding half her valuation.
Thus, the game has a Nash equilibrium in which each player bids half her valuation.
When the number n of bidders exceeds 2, a similar analysis shows that the game has a (symmetric) Nash equilibrium in which every bidder bids the fraction 1 − 1/n of her valuation.
Interpretation: in this example (but also for any distribution F satisfying our assumptions):
choose n−1 valuations randomly and independently, each according to the cumulative distribution F;
the highest of these n−1 valuations is a random variable; denote it X;
fix a valuation v. Some values of X are less than v and others are greater.

Game Theory - A (Short) Introduction 207 9/12/2011
9.6 Illustration: auctions
Consider the distribution of X in those cases in which it is less than v. The expected value of this distribution is E(X | X < v).
Then, the following proposition holds:
If each bidder's valuation is drawn independently from the same continuous and increasing cumulative distribution, a first-price sealed-bid auction (with imperfect information about valuations) has a (symmetric) Nash equilibrium in which each type v of each player bids E(X | X < v), the expected value of the highest of the other players' valuations, conditional on v being higher than all the other valuations.
Application to the case of 2 bidders and the uniform distribution (a numeric check follows below):
for any valuation v of player 1, the cases in which player 2's valuation is less than v are distributed uniformly between 0 and v;
so the expected value of player 2's valuation, conditional on being less than v, is ½v, exactly the equilibrium bid found above.
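A quick Monte Carlo sanity check of mine (not from the slides) that, for uniform valuations, E(X | X < v) equals (1 − 1/n)·v, the equilibrium bid claimed above:

import random

def conditional_mean_highest(n, v, draws=200_000):
    # Estimate E(X | X < v), where X is the max of n-1 independent U(0,1) valuations.
    samples = []
    for _ in range(draws):
        x = max(random.random() for _ in range(n - 1))
        if x < v:
            samples.append(x)
    return sum(samples) / len(samples)

for n, v in [(2, 0.8), (3, 0.8), (5, 0.5)]:
    print(n, v, conditional_mean_highest(n, v), (1 - 1 / n) * v)   # the two columns agree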
Game Theory - A (Short) Introduction 208 9/12/2011
9.6 Illustration: auctions
Comparing equilibria of first- and second-price auctions
As in the case of perfect information, under the assumptions of this section, first- and second-price auctions are revenue equivalent.
Consider the equilibrium of a second-price auction in which every player bids her valuation:
the expected price paid by the bidder with valuation v who wins is the expectation of the highest of the other n−1 valuations, conditional on this maximum being less than v;
in notation, this is E(X | X < v);
we have just seen that this is precisely the bid made by a player with valuation v in a first-price auction (and hence the amount paid by such a player if she wins);
since in both cases the bidder with the highest valuation wins, both auctions yield the auctioneer the same expected revenue!
Game Theory - A (Short) Introduction 209 9/12/2011
9.6 Illustration: auctions
Exercise 296.1 (Auctions with risk-averse bidders)
Consider a variant of the Bayesian game defined earlier in this section in which the players are risk averse. Specifically, suppose each of the n players' preferences are represented by the expected value of the Bernoulli payoff function x^{1/m}, where x is the player's monetary payoff and m > 1. Suppose also that each player's valuation is distributed uniformly between 0 and 1. Show that the Bayesian game that models a first-price sealed-bid auction under these assumptions has a (symmetric) Nash equilibrium in which each type v_i of each player i bids:

b_i = (1 − 1/(m(n − 1) + 1)) v_i

Note that the solution of the problem max_b [b^k (v − b)^l] is b = kv/(k + l).
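One way to see where this formula comes from (a sketch of mine, under the assumption, to be verified in the exercise, that the other n − 1 bidders bid proportionally to their valuations, so that a bid b wins with probability proportional to b^{n−1}):

\max_{b}\; b^{\,n-1}\,(v_i-b)^{1/m}
\quad\Longrightarrow\quad
b \;=\; \frac{(n-1)\,v_i}{(n-1)+1/m}
  \;=\; \frac{m(n-1)}{m(n-1)+1}\,v_i
  \;=\; \Bigl(1-\frac{1}{m(n-1)+1}\Bigr)\,v_i,

using the hint with k = n − 1 and l = 1/m. Risk aversion (m > 1) pushes the bid closer to the valuation than the risk-neutral bid (1 − 1/n)v_i.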
Game Theory - A (Short) Introduction 210 9/12/2011
9.6 Illustration: auctions
Compare the auctioneers revenue in this equilibrium with her
revenue in the symmetric Nash equilibrium of a second-price
sealed-bid auction in which each player bids her valuation (note
that the equilibrium of the second-price auction does not depend on
the players payoff functions).

9.6.3 Interdependent valuations
In this setup, each player's valuation depends on the other players' signals as well as her own.
Denote the function that gives player i's valuation by g_i, and assume that it is increasing in all the signals.
Let P(b) be the function that determines the price paid by the winner as a function of the profile b of bids.
Game Theory - A (Short) Introduction 211 9/12/2011
9.6 Illustration: auctions
The following Bayesian game models first- and second-price auctions with common valuations:
players: the set of bidders 1, ..., n
states: the set of all profiles (t_1, ..., t_n) of signals that the players may receive
actions: each player's set of actions is the set of possible bids (nonnegative numbers)
signals: the set of signals that each player i may observe is the set of possible signal values (the signal function is τ_i(t_1, ..., t_n) = t_i: each player observes her own signal)
beliefs: each type of each player believes that the other players' signals are distributed independently of each other and of her own signal.
Game Theory - A (Short) Introduction 212 9/12/2011
9.6 Illustration: auctions
payoff functions: player i's Bernoulli payoff is

u_i(b, (t_1, ..., t_n)) =  (g_i(t_1, ..., t_n) − P(b))/m   if b_j ≤ b_i for all j, and b_j = b_i for m players j
                           0                               if b_j > b_i for some j

Nash equilibrium in a second-price sealed-bid auction
We analyze the case of two bidders in which each bidder's signal is uniformly distributed from 0 to 1 and the valuation of each bidder i is v_i = αt_i + γt_j, where j is the other player and α ≥ γ ≥ 0 (the case α = 1 and γ = 0 is the private value case, and the case α = γ is called pure common value).
The assumption is that a bidder does not know any other player's signal but, as the analysis will show, the other players' bids contain some information about the other players' signals.
Game Theory - A (Short) Introduction 213 9/12/2011
9.6 Illustration: auctions
Under these assumptions, a second-price sealed-bid auction has a Nash equilibrium in which each type t_i of each player i bids (α + γ)t_i.
Proof: to determine the expected payoff of type t_1 of player 1, we need to find:
the probability with which she wins;
the expected price she pays;
the expected value of player 2's signal if she wins.
Probability that player 1 wins:
given that player 2's bidding function is (α + γ)t_2, player 1's bid b_1 wins only if b_1 ≥ (α + γ)t_2, or if:

t_2 ≤ b_1/(α + γ)

t_2 is distributed uniformly between 0 and 1, so the probability that it is at most b_1/(α + γ) is b_1/(α + γ). Thus, a bid b_1 by player 1 wins with probability b_1/(α + γ).
Game Theory - A (Short) Introduction 214 9/12/2011
9.6 Illustration: auctions
Expected price player 1 pays if she wins:
the price she pays is equal to player 2's bid;
player 2's bid, conditional on being less than b_1, is distributed uniformly between 0 and b_1. Thus, the expected value of player 2's bid, given that it is less than b_1, is ½b_1.
Expected value of player 2's signal if player 1 wins:
player 2's bid, given her signal t_2, is (α + γ)t_2. So the expected value of the signals that yield a bid less than b_1 is b_1/(2(α + γ)).
The expected payoff of player 1 if she bids b_1 is the difference between her expected valuation (given her signal t_1 and the fact that she wins) and the expected price she pays, multiplied by her probability of winning. Using the previous results, we get:

( αt_1 + γ·b_1/(2(α + γ)) − ½b_1 ) · b_1/(α + γ) = αb_1 (2(α + γ)t_1 − b_1) / (2(α + γ)^2)
Game Theory - A (Short) Introduction 215 9/12/2011
9.6 Illustration: auctions
This function is maximized at b_1 = (α + γ)t_1: so, if each type t_2 of player 2 bids (α + γ)t_2, any type t_1 of player 1 optimally bids (α + γ)t_1.
The argument for player 2 is symmetric. We therefore get a symmetric Nash equilibrium.
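A small numeric sketch of mine of this maximization, evaluating the expected payoff derived above over a grid of bids for arbitrary parameter values:

# Expected payoff of bid b1 for type t1 of player 1, when player 2 bids
# (alpha + gamma) * t2. Parameter values below are arbitrary illustrations.
alpha, gamma, t1 = 2.0, 1.0, 0.6

def expected_payoff(b1):
    return alpha * b1 * (2 * (alpha + gamma) * t1 - b1) / (2 * (alpha + gamma) ** 2)

grid = [i / 1000 for i in range(0, 3001)]   # bids from 0 to 3 in steps of 0.001
best = max(grid, key=expected_payoff)
print(best, (alpha + gamma) * t1)           # both equal 1.8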

Exercise 299.1 (Asymmetric Nash equilibria of second-price sealed-bid common value auctions)
Show that when α = γ = 1, for any value of λ > 0 the game has an (asymmetric) Nash equilibrium in which each type t_1 of player 1 bids (1 + λ)t_1 and each type t_2 of player 2 bids (1 + 1/λ)t_2.
Game Theory - A (Short) Introduction 216 9/12/2011
9.6 Illustration: auctions
Note that when player 1 calculates her expected value of the object, she finds the expected value of player 2's signal given that her own bid wins. The fact that her bid wins is, in fact, bad news about the level of the other player's valuation. A bidder who does not take account of this fact is said to suffer from the winner's curse.

Nash equilibrium in a first-price sealed-bid auction
A first-price sealed-bid auction has a Nash equilibrium in which each type t_i of each player i bids ½(α + γ)t_i.

Exercise 299.2 (First-price sealed-bid auction with common values)
Verify that a first-price sealed-bid auction has a Nash equilibrium in which the bid of each type t_i of each player i is ½(α + γ)t_i.
Game Theory - A (Short) Introduction 217 9/12/2011
9.6 Illustration: auctions
Comparing equilibria of first- and second-price auctions:
The revenue equivalence of first- and second-price auctions also holds under common valuations:
in each case, the expected price paid by the winner (in the symmetric equilibrium) is ½(α + γ)t_i;
in each case, the bidder with the highest valuation (equivalently, the highest signal) wins, so the winner is the same, with the same probability, in both auctions.
In fact, the revenue equivalence principle holds much more generally (see Myerson's lemma).
Game Theory - A (Short) Introduction 218 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
9.8.1 First-price sealed-bid auctions
We construct here a symmetric equilibrium of a first-price sealed-bid auction for a generic distribution F of valuations that satisfies the assumptions in Section 9.6.2 and is differentiable on (v−, v+).
Denote the bid of type v of bidder i by β_i(v).
In a symmetric equilibrium, every player uses the same bidding function (so β_i(v) = β(v) for some function β).
Assume:
β is increasing in the valuation (this seems reasonable);
β is differentiable.
Then:
there is a condition that β must satisfy in any symmetric equilibrium;
exactly one function satisfies this condition;
this function is increasing.
Game Theory - A (Short) Introduction 219 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
Suppose that all n−1 players other than i bid according to the increasing differentiable function β.
Then, given the assumption on F, the probability of a tie is zero. Hence, for any bid b, the expected payoff of player i when her valuation is v and she bids b is:

(v − b) Pr(highest of the other bids ≤ b) = (v − b) Pr(all n−1 other bids ≤ b)

A player bidding according to the function β bids at most b, for β(v−) ≤ b ≤ β(v+), if her valuation is at most β^{-1}(b) (the inverse of β evaluated at b).
Game Theory - A (Short) Introduction 220 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
Thus, the probability that the bids of the n−1 other players are all at most b is the probability that the highest of n−1 randomly selected valuations (denoted X in Section 9.6.2) is at most β^{-1}(b).
Denoting the CDF of X by H, the expected payoff is thus:

(v − b) H(β^{-1}(b))   if β(v−) ≤ b ≤ β(v+)
0                      if b < β(v−)
v − b                  if b > β(v+)


Game Theory - A (Short) Introduction 221 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
In a symmetric equilibrium in which every player bids according to β, we have β(v) ≤ v if v > v− and β(v−) = v−:
if v > v− and β(v) > v, then a player with valuation v wins with positive probability (players with valuations less than v bid less than β(v) because β is increasing); if she wins, she obtains a negative payoff, while she obtains a payoff of 0 by bidding v. So, for equilibrium, we need β(v) ≤ v if v > v−;
given that β satisfies this condition, if β(v−) > v−, then a player with valuation v− wins with positive probability and obtains a negative payoff; thus β(v−) ≤ v−. But if β(v−) < v−, then (because β is continuous) players with valuations slightly greater than v− also bid less than v−, so such a player who increases her bid slightly wins with positive probability and obtains a positive payoff by doing so. We conclude that β(v−) = v−.
Game Theory - A (Short) Introduction 222 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
The expected payoff of a player of type v, when every other player uses the bidding function β, is differentiable on (β(v−), β(v+)):
given that β is increasing and differentiable;
given that β(v−) = v−;
and, if v > v−, this expected payoff is increasing at b = β(v−), so the best response is interior.
Thus, the derivative of this expected payoff with respect to b is zero at any best response less than β(v+):

F.O.C.:  (v − b) H'(β^{-1}(b)) / β'(β^{-1}(b)) − H(β^{-1}(b)) = 0

using the fact that the derivative of β^{-1} at the point b is 1/β'(β^{-1}(b)).
Game Theory - A (Short) Introduction 223 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
In a symmetric equilibrium in which every player bids according to β, the best response of type v of any given player to the other players' strategies is β(v). Because β is increasing, we have β(v) < β(v+) for v < v+. So β(v) must satisfy the F.O.C. whenever v− < v < v+.
If b = β(v), then β^{-1}(b) = v. So substituting b = β(v) into the F.O.C. and multiplying by β'(v) yields:

β'(v)H(v) + β(v)H'(v) = vH'(v)   for v− < v < v+

The left-hand side of this differential equation is the derivative with respect to v of β(v)H(v). Thus, for some constant C:

β(v)H(v) = ∫_{v−}^{v} xH'(x) dx + C   for v− < v < v+
Game Theory - A (Short) Introduction 224 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
The function β is bounded (as it is differentiable), so considering the limit as v approaches v−, we deduce that C = 0.
We conclude that if the game has a symmetric Nash equilibrium in which each player's bidding function is increasing and differentiable on (v−, v+), then this function, denoted β*, is defined by:

β*(v) = (1/H(v)) ∫_{v−}^{v} xH'(x) dx   for v− < v < v+,   and β*(v−) = v−

Note that H is the CDF of X, the highest of n−1 independently drawn valuations. Thus β*(v) is the expected value of X conditional on its being less than v:

β*(v) = E(X | X < v)
Game Theory - A (Short) Introduction 225 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
Note finally that, using integration by parts, the numerator in the expression for β*(v) is:

∫_{v−}^{v} xH'(x) dx = vH(v) − ∫_{v−}^{v} H(x) dx

Given H(v) = (F(v))^{n−1} (the probability that each of the n−1 valuations is at most v), we have:

β*(v) = v − ( ∫_{v−}^{v} H(x) dx ) / H(v) = v − ( ∫_{v−}^{v} (F(x))^{n−1} dx ) / (F(v))^{n−1}   for v− < v < v+

We see that β*(v) < v for v− < v < v+.
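A numeric cross-check of mine of this closed form: for a (hypothetical, non-uniform) distribution F(x) = x^2 on [0, 1], compare v − ∫F^{n−1}/F^{n−1} with a Monte Carlo estimate of E(X | X < v).

import random

n, v = 3, 0.7

def F(x):
    return x ** 2    # example CDF; a valuation is the square root of a uniform draw

# Closed form: beta*(v) = v - (integral of F(x)^(n-1) from 0 to v) / F(v)^(n-1)
steps = 100_000
integral = sum(F(v * (i + 0.5) / steps) ** (n - 1) for i in range(steps)) * v / steps
closed_form = v - integral / F(v) ** (n - 1)

# Monte Carlo estimate of E(X | X < v), X the max of n-1 valuations drawn from F
samples = []
for _ in range(200_000):
    x = max(random.random() ** 0.5 for _ in range(n - 1))   # inverse-CDF sampling
    if x < v:
        samples.append(x)
print(closed_form, sum(samples) / len(samples))              # both are close to 0.56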
Game Theory - A (Short) Introduction 226 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
Exercise 309.2 (Property of the bidding function in a first-price
auction)
Show that the bidding function β* is increasing.
5. Extensive Games
with Perfect
Information: Theory
Framework

Strategic games suppress the sequential structure of decision-
making: everything is about ex-ante anticipations and
simultaneous decisions.

Extensive game describes explicitly the sequential structure of
decision-making, allowing us to study situations in which each
decision-maker is free to change her mind as events unfold.

In this setup, perfect information means that each decision-
maker is fully informed about all previous actions.
Game Theory - A (Short) Introduction 228 9/12/2011
5.1 Extensive games with
perfect information
5.1.1 Definition
We add to the players and their preferences the order of the players' moves and the actions each player may take at each point.
Each possible sequence of actions forms a terminal history.
The function that gives the player who moves at each point in each terminal history is the player function.
So, the components of an extensive game are:
the players
the terminal histories
the player function
the players' preferences
Game Theory - A (Short) Introduction 229 9/12/2011
5.1 Extensive games with
perfect information
Example 154.1: Entry game
An incumbent faces the possibility of entry by a challenger (eg.:
new entrant in an industry). The challenger may enter or not. If
it enters, the incumbent may either acquiesce or fight.

Extensive game components:
Players: Incumbent, Challenger
Terminal histories: (In, Acquiesce), (In, Fight), Out
Player function : player(Start) = Challenger, player(In) =
Incumbent
Preferences : ?


Game Theory - A (Short) Introduction 230 9/12/2011
5.1 Extensive games with
perfect information
Note that the set of actions available to each player is NOT
part of the game description. But it can be deduced from the
description of the game (after any sequence of events, a player
chooses an action).

Entry Game
Game Theory - A (Short) Introduction 231 9/12/2011
Actions
Challenger: {In,Out}
Incumbent: {Fight, Acquiesce}
5.1 Extensive games with
perfect information
Terminal histories are a set of sequences:
the first element of the sequence starts the game;
the order of the sequence depicts the order of actions taken by the players.
Entry game: {(In, Acquiesce), (In, Fight), (Out)}

Define the subhistories of a finite sequence (a_1, a_2, ..., a_k) of actions to be:
the empty sequence consisting of no actions (the empty history, representing the start of the game);
all sequences of the form (a_1, a_2, ..., a_m), where 1 ≤ m ≤ k.
The entire sequence is a subhistory of itself. A subhistory NOT equal to the entire sequence is called a proper subhistory.


Game Theory - A (Short) Introduction 232 9/12/2011
5.1 Extensive games with
perfect information
Entry game:
The subhistories of (In, Acquiesce) are the empty history and the sequences (In) and (In, Acquiesce).
The proper subhistories are the empty history and the sequence (In).
Definition 155.1 (Extensive game with perfect information)
An extensive game with perfect information consists of
a set of players
a set of sequences (terminal histories) with the property that no sequence is a proper subhistory of any other sequence
a function (the player function) that assigns a player to every sequence that is a proper subhistory of some terminal history
for each player, preferences over the set of terminal histories
Game Theory - A (Short) Introduction 233 9/12/2011
The set of terminal histories is the set of all sequences of actions that may occur. Terminal histories represent outcomes of the game.
If the length of the longest terminal history is finite, we say that the game has a finite horizon. If the game has a finite horizon and finitely many terminal histories, we say that the game is finite.
5.1 Extensive games with
perfect information
Entry game:
Suppose that the best outcome for the challenger is that it
enters and the incumbent acquiesces, and the worst outcome is
that it enters and the incumbent fights, whereas the best
outcome for the incumbent is that the challenger stays out, and
the worst outcome is that it enters and there is a fight.
The situation is modeled as follows:
Players: {Challenger, Incumbent}
Terminal histories: (In, Acquiesce), (In, Fight), (Out)
Player function: P(∅) = Challenger, P(In) = Incumbent
Preferences:
Challenger: u(In,Acquiesce) = 2, u(Out) = 1, u(In,Fight) = 0
Incumbent: u(Out) = 2, u(In,Acquiesce) = 1, u(In,Fight) = 0
Game Theory - A (Short) Introduction 234 9/12/2011
5.1 Extensive games with
perfect information
Game Theory - A (Short) Introduction 235 9/12/2011
(Game tree of the entry game: the start of the game at the top, players labeling the decision nodes, actions labeling the branches, payoffs at the terminal nodes.)
The sets of actions can be deduced from the set of terminal histories and the player function:
A(h) = {a : (h, a) is a history}
where h is some nonterminal history and a is one of the actions available to the player who moves after h.
E.g.: A(In) = {Acquiesce, Fight}
5.1 Extensive games with
perfect information
Exercise 156.2
a. Represent in a diagram the two-player extensive game with perfect information in which the terminal histories are (C,E), (C,F), (D,G), and (D,H), the player function is given by P(∅) = 1 and P(C) = P(D) = 2, player 1 prefers (C,F) to (D,G) to (D,H), and player 2 prefers (D,G) to (C,F) to (C,E).
b. Write down the set of players, the set of terminal histories, the player function, and the players' preferences for the game represented on the right side of the slide.
Game Theory - A (Short) Introduction 236 9/12/2011
5.1 Extensive games with
perfect information
An extensive game with perfect information models a situation in which each player, when choosing an action, knows all actions chosen previously and always moves alone. Typical situations modeled this way are:
a race between firms developing a new technology;
a race between directors to become CEO;
games like chess, tic-tac-toe, ...
Two extensions of extensive games with perfect information are:
allowing players to move simultaneously;
allowing arbitrary patterns of information.
Game Theory - A (Short) Introduction 237 9/12/2011
5.1 Extensive games with
perfect information
Entry game solution:
Solution: the challenger will enter and the incumbent will acquiesce.
Analysis:
the challenger sees that, if it enters, the incumbent will acquiesce;
since the incumbent will acquiesce in case of entry, the challenger is better off entering than staying out.

Backward induction cannot always be used to solve extensive games:
for an infinite horizon game, there is no end point from which to start the induction;
but it can also fail in finite horizon games, as the following example shows.
Game Theory - A (Short) Introduction 238 9/12/2011
This line of argument is an example of backward induction.
5.1 Extensive games with
perfect information
Example: in this game, the Challenger sees that the Incumbent
is indifferent between Acquiesce and Fight if he enters. The
question of whether to enter or not remains open.
Game Theory - A (Short) Introduction 239 9/12/2011
5.1 Extensive games with
perfect information
Another approach to defining equilibrium takes off from the
notion of Nash equilibrium: it seeks steady states.

In games in which backward induction is well-defined, this
approach turns out to lead to the backward induction outcome.
So, there is no conflict between the two approaches.
Game Theory - A (Short) Introduction 240 9/12/2011
5.2 Strategies and outcomes
5.2.1 Strategies
Definition 159.1 ((full) strategy)
A (full) strategy of player i in an extensive game with perfect information is a function that assigns to each history h after which it is player i's turn to move (P(h) = i, where P is the player function) an action in A(h), the set of actions available after h.
Game Theory - A (Short) Introduction 241 9/12/2011
Player 1 has 2 strategies: C and D
Player 2 has 4 strategies:

               Action assigned to history C   Action assigned to history D
  Strategy 1   E                               G
  Strategy 2   E                               H
  Strategy 3   F                               G
  Strategy 4   F                               H
5.2 Strategies and outcomes
Notation
Player 1 strategies: C, D
Player 2 strategies: EG, EH, FG, FH
Actions are written in the order in which they occur in the
game.
If actions are available at the same stage of the game, they are
written from left to right as they appear in the game diagram.
Game Theory - A (Short) Introduction 242 9/12/2011
Each player's full strategy is more than a plan of action or contingency plan: it specifies what the player does for each of the possible choices of the other player.
In other words, if the player appoints an agent to play the game for her and tells the agent her strategy, then the agent has enough information to carry out her wishes, whatever actions the other players take.
5.2 Strategies and outcomes
Exercise:
Determine the strategies of player 1 in the following game:
Game Theory - A (Short) Introduction 243 9/12/2011
5.2 Strategies and outcomes
Solution: CG, CH, DG, DH
Game Theory - A (Short) Introduction 244 9/12/2011
Each (full) strategy specifies an action after the history (C,E) even if it specifies the action D at the beginning of the game!
A (full) strategy must specify an action for every history after which it is the player's turn to move, even for histories that, if the strategy is followed, do not occur (this is the difference between a plan of action and a full strategy).
One way to interpret a (full) strategy is as a plan of action that specifies the player's actions even if she makes mistakes. E.g.: DG may be read as "I choose D but, if I make a mistake and play C, then I will play G if the other player plays E."
5.2 Strategies and outcomes
5.2.2 Outcomes
A strategy profile is the vector of strategies played by the players. It determines the terminal history that occurs. We denote a strategy profile by s. The terminal history associated with the strategy profile s is the outcome of s and is denoted O(s).
Example (for the game above):
the strategy profile (DG, E) is associated with the terminal history (D);
the strategy profile (CH, E) is associated with the terminal history (C, E, H).
Note that the outcome O(s) of the strategy profile s depends only on the players' plans of action, not on their full strategies.
Game Theory - A (Short) Introduction 245 9/12/2011
5.3 Nash Equilibrium
Definition 161.2 (Nash equilibrium of extensive game with perfect information)
The strategy profile s* in an extensive game with perfect information is a Nash equilibrium if, for every player i and every strategy r_i of player i, the terminal history O(s*) generated by s* is at least as good according to player i's preferences as the terminal history O(r_i, s*_{-i}) generated by the strategy profile (r_i, s*_{-i}) in which player i chooses r_i while every other player j chooses s*_j. Equivalently, for each player i:

u_i(O(s*)) ≥ u_i(O(r_i, s*_{-i}))   for every strategy r_i
Game Theory - A (Short) Introduction 246 9/12/2011
5.3 Nash Equilibrium
One way to find the Nash equilibria of an extensive game in which each player has finitely many strategies is:
to list each player's (full) strategies;
to combine the strategies of all players to list all strategy profiles;
to find the outcome of each strategy profile;
to analyze this information as a strategic game.
This is known as the strategic form of the extensive game.
Game Theory - A (Short) Introduction 247 9/12/2011
The set of Nash equilibria of any extensive game
with perfect information is the set of Nash equilibria
of its strategic form.
5.3 Nash Equilibrium
Example 162.1: the entry game
Game Theory - A (Short) Introduction 248 9/12/2011
Player 1 (Challenger) strategies: {In, Out}
Player 2 (Incumbent) strategies: {Acquiesce, Fight}
Strategic form of the game (Challenger chooses the row, Incumbent the column; * marks a best response):

            Acquiesce    Fight
   In       2*, 1*       0, 0
   Out      1, 2*        1*, 2*

Nash equilibria (a small code check follows):
- (In, Acquiesce): the one identified by backward induction
- (Out, Fight): this is also a steady state; no player has an incentive to deviate.
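A tiny sketch of mine that finds these two equilibria by checking every cell of the strategic form for profitable deviations:

# Strategic form of the entry game: payoffs[(row, col)] = (Challenger, Incumbent).
payoffs = {
    ("In", "Acquiesce"): (2, 1), ("In", "Fight"): (0, 0),
    ("Out", "Acquiesce"): (1, 2), ("Out", "Fight"): (1, 2),
}
rows, cols = ("In", "Out"), ("Acquiesce", "Fight")

def is_nash(r, c):
    no_row_deviation = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
    no_col_deviation = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
    return no_row_deviation and no_col_deviation

print([cell for cell in payoffs if is_nash(*cell)])
# [('In', 'Acquiesce'), ('Out', 'Fight')]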
5.3 Nash Equilibrium
How to interpret the Nash equilibrium (Out, Fight)?
This situation is never observed in the extensive game.
One way to escape from this difficulty is to consider a slightly perturbed steady state in which, on rare occasions, nonequilibrium actions are taken:
players make mistakes or deliberately experiment;
these perturbations allow each player eventually to observe every other player's action after every history.
Another important point to note is that the extensive game embodies the assumption that the incumbent cannot commit, at the beginning of the game, to fight if the challenger enters. If such a commitment were credible, the challenger would stay out. But the threat is not credible (because it is irrational to fight after entry).
Game Theory - A (Short) Introduction 249 9/12/2011
5.3 Nash Equilibrium
Exercise 163.1 (Nash equilibria of extensive games)
Find the Nash equilibria of the extensive game represented by
the figure (when constructing the strategic form of each game,
be sure to include all the strategies of each player).
Game Theory - A (Short) Introduction 250 9/12/2011
5.4 Subgame perfect equilibrium
5.4.1 Definition
The notion of Nash equilibrium ignores the sequential structure of an extensive game. This may lead to steady states that are not robust (in the sense that they do not appear as such in the extensive game).
We now consider a new notion of equilibrium that models a robust steady state. This notion requires:
(i) that each player's strategy be optimal
(ii) after every possible history.
Subgame: for any nonterminal history h, the subgame following h is the part of the game that remains after h has occurred.
Example: in the entry game, the subgame following the history In is the game in which the incumbent is the only player and there are two terminal histories: Acquiesce and Fight.
Game Theory - A (Short) Introduction 251 9/12/2011
5.4 Subgame perfect equilibrium
Definition 164.1 (Subgame of extensive game with perfect information)
Let Γ be an extensive game with perfect information, with player function P. For any nonterminal history h of Γ, the subgame Γ(h) following the history h is the following extensive game:
Players: the players in Γ
Terminal histories: the set of all sequences h' of actions such that (h, h') is a terminal history of Γ
Player function: the player P(h, h') is assigned to each proper subhistory h' of a terminal history
Preferences: each player prefers h' to h'' if she prefers (h, h') to (h, h'') in Γ
Note that the subgame following the empty history is the entire game.
Game Theory - A (Short) Introduction 252 9/12/2011
5.4 Subgame perfect equilibrium
A subgame perfect equilibrium is a strategy profile s* with the property that in no subgame can any player i do better by choosing a strategy different from s*_i, given that every other player j adheres to s*_j.

Example: in the entry game, the Nash equilibrium (Out, Fight) is not a subgame perfect equilibrium because in the subgame following the history In, the strategy Fight is not optimal for the incumbent: in this subgame, the incumbent is better off choosing Acquiesce than choosing Fight.

Notation: let h be a history and s a strategy profile to which the players adhere after h has occurred. We denote by O_h(s) the outcome generated in the subgame following h by the strategy profile induced by s.
Game Theory - A (Short) Introduction 253 9/12/2011
5.4 Subgame perfect equilibrium
Example: the entry game
Let s be the strategy profile (Out, Fight).
Let h be the history In.
If h occurs and, afterwards, the players adhere to s, the resulting terminal history is O_h(s) = (In, Fight).
Game Theory - A (Short) Introduction 254 9/12/2011
5.4 Subgame perfect equilibrium
Definition 166.1 (Subgame perfect equilibrium of extensive game with perfect information)
The strategy profile s* in an extensive game with perfect information is a subgame perfect equilibrium if, for every player i, every history h after which it is player i's turn to move (P(h) = i), and every strategy r_i of player i, the terminal history O_h(s*) generated by s* after the history h is at least as good according to player i's preferences as the terminal history O_h(r_i, s*_{-i}) generated by the strategy profile (r_i, s*_{-i}):

u_i(O_h(s*)) ≥ u_i(O_h(r_i, s*_{-i}))   for every strategy r_i of player i
Game Theory - A (Short) Introduction 255 9/12/2011
The key point is that each player's strategy is required to be optimal for every history after which it is the player's turn to move, not only at the start of the game (as in the definition of a Nash equilibrium).
5.4 Subgame perfect equilibrium
5.4.2 Subgame perfect equilibrium and Nash equilibrium

Every subgame perfect equilibrium is a Nash equilibrium (because in a subgame perfect equilibrium, every player's strategy is optimal, in particular after the empty history).

A subgame perfect equilibrium generates a Nash equilibrium in every subgame.

A Nash equilibrium only requires each player's strategy to be optimal in the subgames that are reached when the players follow their strategies.

Subgame perfection requires, in addition, that each player's strategy be optimal after histories that do not occur if the players follow their strategies.
Game Theory - A (Short) Introduction 256 9/12/2011
5.4 Subgame perfect equilibrium
Example 167.2 (Variant of the entry game)
Consider the variant of the entry game in which the incumbent
is indifferent between fighting and acquiescing if the challenger
enters. Find the subgame perfect equilibria.

Game Theory - A (Short) Introduction 257 9/12/2011
5.4 Subgame perfect equilibrium
Solution: both Nash equilibria (In, Acquiesce) and (Out, Fight) are subgame perfect equilibria because, after the history In, both Fight and Acquiesce are optimal for the incumbent.

Exercise 168.1
Which of the Nash equilibria of the following game are subgame perfect?
Game Theory - A (Short) Introduction 258 9/12/2011
5.4 Subgame perfect equilibrium
5.4.4 Interpretation
A Nash equilibrium corresponds to a steady state in an idealized setting in which each player's long experience leads her to correct beliefs about the other players' actions.
A subgame perfect equilibrium of an extensive game corresponds to a slightly perturbed steady state in which all players, on rare occasions, take nonequilibrium actions. Thus, players know how the other players will behave in every subgame.

A subgame perfect equilibrium specifies the players' actions:
not only after histories consistent with the strategy profile;
but also after histories that result when some player chooses arbitrary alternative actions.
Game Theory - A (Short) Introduction 259 9/12/2011
5.4 Subgame perfect equilibrium
Alternative interpretation:
Consider an extensive game with perfect information in which:
each player has a unique best action at every history after
which it is her turn to move;
horizon is finite;
In such a game, a player who knows the other players preferences
(eg: profit maximization) and knows that the other players are
rational may use backward induction to deduce her optimal
strategy.

The subgame perfect equilibrium is the outcome of the players
rational calculations about each others strategies. Note that:
this interpretation is not tenable in games in which some player has more
than one optimal action after some history;
But an extension of the procedure of backward induction can be used to find
all subgame perfect equilibria of finite horizon games.


Game Theory - A (Short) Introduction 260 9/12/2011
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
In a game with a finite horizon, the set of subgame perfect equilibria may be found more directly by using an extension of the procedure of backward induction.

Define the length of a subgame to be the length of the longest history in the subgame.

The procedure of backward induction works as follows (a sketch in code follows this list):
(i) start by finding the optimal actions of the players who move in the last subgames (stage k);
(ii) next, find the optimal actions of the players who move at stage k−1, given the optimal actions already found in all shorter subgames;
(iii) continue the procedure back up to stage 1.
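A minimal sketch of mine of this procedure in code, applied to the entry game; each node is either a terminal payoff dict or a (player, {action: subtree}) pair, and a unique optimal action is assumed at every node:

# Backward induction on a finite game tree with a unique optimal action per node.
def backward_induction(node, history=()):
    """Return (actions, payoffs): the optimal action at every decision node of the
    subgame rooted at `node`, keyed by history, and the resulting payoffs."""
    if isinstance(node, dict):                       # terminal history: payoff dict
        return {}, node
    player, branches = node
    actions, outcomes = {}, {}
    for a, sub in branches.items():
        sub_actions, sub_payoffs = backward_induction(sub, history + (a,))
        actions.update(sub_actions)                  # optimal play in every subgame
        outcomes[a] = sub_payoffs
    best = max(outcomes, key=lambda a: outcomes[a][player])
    actions[history] = (player, best)                # optimal action at this node
    return actions, outcomes[best]

entry_game = ("Challenger", {
    "Out": {"Challenger": 1, "Incumbent": 2},
    "In": ("Incumbent", {
        "Acquiesce": {"Challenger": 2, "Incumbent": 1},
        "Fight": {"Challenger": 0, "Incumbent": 0},
    }),
})

print(backward_induction(entry_game))
# ({('In',): ('Incumbent', 'Acquiesce'), (): ('Challenger', 'In')},
#  {'Challenger': 2, 'Incumbent': 1})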
Game Theory - A (Short) Introduction 261 9/12/2011
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
Example
We first deduce that in the subgame of length 1 following history
(C,E), player 1 chooses G;
Then, at the start of the subgame of length 2 following the history
C, player 2 chooses E;
Then, at the start of the whole game, player 1 chooses D.
Game Theory - A (Short) Introduction 262 9/12/2011
In any game in which this procedure selects
a single action for the player who moves at
the start of each subgame, the strategy
profile thus selected is the unique subgame
perfect equilibrium of the game.
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
What happens in a game in which at the start of some
subgames, more than one action is optimal ?

Game Theory - A (Short) Introduction 263 9/12/2011
The solution is to trace back separately the implications for behavior in the longer subgames of every combination of optimal actions in the shorter subgames.
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
Example 172.1
Game Theory - A (Short) Introduction 264 9/12/2011
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
The game has three subgames of length 1, in each of which
player 2 moves:
In subgames following the histories C and D, player 2 is indifferent
between her two actions;
In the subgame following the history E, player 2s unique optimal
action is K.
Game Theory - A (Short) Introduction 265 9/12/2011
There are four combinations of player 2's optimal actions in the subgames of length 1:
FHK
FIK
GHK
GIK
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
The game has a single subgame of length 2, namely the whole
game, in which player 1 moves first. We now consider player
1s optimal action in this game for every combination of optimal
actions of player 2 in the subgame of length 1:
For the combinations FHK and FIK of optimal actions of player 2,
player 1s optimal action at the start of the game is C;
For the combination GHK of optimal actions of player 2, the actions
C, D, and E are optimal for player 1;
For the combination GIK of optimal actions of player 2, player 1s
optimal action at the start of the game is D.
Game Theory - A (Short) Introduction 266 9/12/2011
The strategy pairs isolated by the procedure are (C,FHK),
(C,FIK), (C,GHK), (D,GHK) and (D,GIK)
The set of strategy profiles that this procedure yields for the whole
game is the set of subgame perfect equilibria of the game.
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
Two important propositions:
Game Theory - A (Short) Introduction 267 9/12/2011
PROPOSITION 172.1 (Subgame perfect equilibrium of finite
horizon games and backward induction)

The set of subgame perfect equilibria of a finite horizon
extensive game with perfect information is equal to the set of
strategy profiles isolated by the procedure of backward
induction.
PROPOSITION 173.1 (Existence of subgame perfect
equilibrium)

Every finite extensive game with perfect information has a
subgame perfect equilibrium.
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
Exercise 173.2
Find the subgame perfect equilibria of this game:
Game Theory - A (Short) Introduction 268 9/12/2011
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
Exercise 176.1 (Dollar auction)
An object that two people each value at v, a positive integer, is
sold in an auction. In the auction, the people take turns bidding;
a bid must be a positive integer greater than the previous bid.
On her turn, a player may pass rather than bid, in which case
the game ends and the other player receives the object; both
players pay their last bids (if any) (if player 1 passes initially, for
example, player 2 receives the object and makes no payment; if
player 1 bids 1, player 2 bids 3 and then player 1 passes, player
2 obtains the object and pays 3, and player 1 pays 1). Each
persons wealth is w, which exceeds v. Neither player may bid
more than her wealth. For v=2 and w=3, model the auction as
an extensive game and find its subgame perfect equilibria.

Game Theory - A (Short) Introduction 269 9/12/2011
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
Exercise 176.2 (A synergistic relationship)
Two individuals are involved in a synergistic relationship. Suppose that the players choose their effort levels sequentially (rather than simultaneously). First individual 1 chooses her effort level a_1. Then individual 2 chooses her effort level a_2. An effort level is a nonnegative number, and individual i's preferences (for i = 1, 2) are represented by the payoff function a_i(c + a_j − a_i), where j is the other individual and c > 0 is a constant.

Find the subgame perfect equilibria.

Game Theory - A (Short) Introduction 270 9/12/2011
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
Exercise 174.2 (An entry game with a financially constrained
firm)
An incumbent in an industry faces the possibility of entry by a challenger. First the
challenger chooses whether to enter. If it does not enter, neither firm has any
further action; the incumbents payoff is TM (it obtains the profit M in each of the
following T 1 periods). The challengers payoff is 0. If the challenger enters, it
pays the entry costs f > 0, and in each of T periods the incumbent first commits to
fight or cooperate with the challenger in that period, then the challenger chooses
whether to stay in the industry or to exit. If, in any period, the challenger stays in,
each firm obtains in that period the profit F < 0 if the incumbent fights and C >
max {F,f} if it cooperates. If, in any period, the challenger exits, both firms obtain
the profit zero in that period (regardless of the incumbents action); the incumbent
obtains the profit M > 2 C and the challenger the profit 0 in every subsequent
period. Once the challenger exits, it cannot reenter. Each firm cares about the sum
of its profits.

Find the subgame perfect equilibria of the extensive game.
Game Theory - A (Short) Introduction 271 9/12/2011
10. Extensive Games
with Imperfect
Information
Framework

We keep in this chapter the Extensive game setup: extensive
game describes explicitly the sequential structure of decision-
making, allowing us to study situations in which each decision-
maker is free to change her mind as events unfold.

In this imperfect information setup, each player, when
choosing her action, may not be informed of the other players
previous actions.
Game Theory - A (Short) Introduction 273 9/12/2011
10.1 Extensive games with
imperfect information
To describe an extensive game with perfect information, we need to specify the set of players, the set of terminal histories, the player function and the players' preferences.

To describe an extensive game with imperfect information, we need to add a specification of each player's information about the history at every point at which she moves:
denote by H_i the set of histories after which player i moves;
we specify player i's information by partitioning H_i into a collection of information sets (the collection is called player i's information partition);
when making her decision, the player is informed of the information set that has occurred, but not of which history within that set has occurred.
Game Theory - A (Short) Introduction 274 9/12/2011
10.1 Extensive games with
imperfect information
Example
Suppose player i moves after the histories h1, h2, and h3 (so H_i = {h1, h2, h3}) and is informed only that the history is h1, or that it is either h2 or h3.
Then player i's information partition consists of the two information sets {h1} and {h2, h3}.
Note that if the player is not informed at all, her information partition contains a single information set, {h1, h2, h3}.
Important restriction
Denote by A(h) the set of actions available to the player who moves after history h.
We allow two histories h and h' to be in the same information set only if A(h) = A(h').
Why?
Game Theory - A (Short) Introduction 275 9/12/2011
10.1 Extensive games with
imperfect information
Note that we allow moves of chance. So an outcome is a lottery (a probability distribution) over the set of terminal histories.

Definition 314.1 (Extensive game with imperfect information)
An extensive game with imperfect information consists of
a set of players
a set of sequences (terminal histories) having the property that no sequence is a proper subhistory of any other sequence
a function (the player function) that assigns either a player or chance to every sequence that is a proper subhistory of some terminal history
a function that assigns, to each history that the player function assigns to chance, a probability distribution over the actions available after that history (each probability distribution being independent of every other distribution)
for each player, a partition (the information partition) of the set of histories assigned to that player by the player function
for each player, preferences over the set of lotteries over terminal histories.

Game Theory - A (Short) Introduction 276 9/12/2011
10.1 Extensive games with
imperfect information
Example 314.2: BoS as an extensive game
Games in which each player moves once and no player, when moving, is informed of any other player's action may be modeled as strategic games or as extensive games with imperfect information.
BoS:
each of two people chooses whether to go to a Bach or a Stravinsky concert;
neither person, when choosing a concert, knows the concert chosen by the other person.
Model this game as an extensive game with imperfect information.
Game Theory - A (Short) Introduction 277 9/12/2011
10.1 Extensive games with
imperfect information
Solution:
Players: the two people, say 1 and 2
Terminal histories: (B, B), (B, S), (S, B), (S, S)
Player function: P(∅) = 1, P(B) = P(S) = 2
Chance moves: None
Information partitions
Player 1: {∅} (a single information set: player 1 has a single
move and when she moves, she is informed that the
game is beginning)
Player 2: {B, S} (player 2 has a single move and when she
moves, she is not informed whether the history is B or S)
Preferences: given in the game description
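To make Definition 314.1 concrete, here is a minimal sketch (Python, not part of the original slides) of one way the BoS extensive game above could be encoded. The dictionary layout and the standard BoS payoffs (2,1), (0,0), (0,0), (1,2) are illustrative assumptions, not the slides' notation.

    # Minimal encoding of BoS as an extensive game with imperfect information.
    # Histories are tuples of actions; the empty tuple () is the empty history.
    EMPTY = ()

    bos = {
        "players": [1, 2],
        # terminal histories together with (assumed) standard BoS payoffs
        "terminal_histories": {
            ("B", "B"): (2, 1),
            ("B", "S"): (0, 0),
            ("S", "B"): (0, 0),
            ("S", "S"): (1, 2),
        },
        # player function: who moves after each non-terminal history
        "player_function": {EMPTY: 1, ("B",): 2, ("S",): 2},
        # chance moves: none in this game
        "chance": {},
        # information partition: for each player, a list of information sets;
        # player 2 cannot distinguish the histories B and S when she moves
        "information_partition": {
            1: [[EMPTY]],
            2: [[("B",), ("S",)]],
        },
    }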
Game Theory - A (Short) Introduction 278 9/12/2011
10.1 Extensive games with
imperfect information
Figure 315.1
Game Theory - A (Short) Introduction 279 9/12/2011
(The line connecting the two histories indicates that they are in the
same information set.)
10.1 Extensive games with
imperfect information
Example 317.1: Variant of Entry Game (the challenger, before
entering, takes an action that the incumbent does not observe)
An incumbent faces the possibility of entry by a challenger (see example
154.1)
The challenger has three choices:
Stay out
Prepare itself for combat and enter (preparation is costly but reduces
loss from fight)
Enter without preparations
A fight is less costly for the incumbent if the entrant is unprepared. But,
regardless of the entrant's readiness, the incumbent prefers to acquiesce rather than to
fight.
The incumbent observes whether the challenger enters but not whether he is
prepared.
Model (graphically by a tree) this game as an extensive game with imperfect
information.


Game Theory - A (Short) Introduction 280 9/12/2011
10.1 Extensive games with
imperfect information
Figure 317.1
Game Theory - A (Short) Introduction 281 9/12/2011
10.2 Strategies
A strategy specifies the action the player takes whenever it is her turn to
move.
Definition 318.1 (Strategy in extensive game)
A (pure) strategy of player i in an extensive game is a function that assigns
to each of player i's information sets Ii an action in A(Ii) (the set of actions
available to player i at the information set Ii).

In the BoS game, each player has a single information set at which two actions (B
or S) are available. Thus, each player has two possible strategies: B or S. If a
player has several information sets, a strategy specifies a list of actions, one for
each information set, in the form (a1, a2, ...).

Definition 318.3 (Mixed Strategy in extensive game)
A mixed strategy of a player in an extensive game is a probability
distribution over the players pure strategies.

With mixed strategies, players are allowed to choose their actions randomly.

Game Theory - A (Short) Introduction 282 9/12/2011
10.3 Nash equilibrium
Definition 318.4 (Nash equilibrium of extensive game)
Intuition: a strategy profile is a Nash equilibrium if no player has an
alternative strategy that increases her payoff, given the other
players' strategies.
Formal definition: The mixed strategy profile α* in an extensive
game is a (mixed strategy) Nash equilibrium if, for each player i
and every mixed strategy αi of player i, player i's expected payoff
to α* is at least as large as her expected payoff to (αi, α*-i),
according to a payoff function whose expected value represents
player i's preferences over lotteries.



Notes:
an equilibrium in which no player's strategy entails any randomization (every player's
strategy assigns probability 1 to a single action at each information set) is a pure strategy
Nash equilibrium.
One way to find a Nash equilibrium of an extensive game is to construct the strategic form
of the game and analyze it as a strategic game.
Game Theory - A (Short) Introduction 283 9/12/2011
10.3 Nash equilibrium
Example 319.1: BoS as an extensive game
Each player has two strategies: B and S
The strategic form of the game is given in Figure 19.1
Thus the game has two pure Nash equilibria:
(B, B)
(S, S)

In the BoS game, player 2 is not informed of the action chosen by player 1 when
taking an action (her information set contains both the history B and the history S).
However, player 2's experience playing the game tells her the history to expect.

E.g.: in a steady state in which every person who plays the role of either player
chooses B, each player knows (by experience) that the other player will choose
B.
Game Theory - A (Short) Introduction 284 9/12/2011
10.3 Nash equilibrium
How may we extend the idea of subgame perfect equilibrium to
extensive games with imperfect information, to deal with
situations in which the notion of Nash equilibrium is not
adequate?

Example 322.1: Entry game
The strategic form of the entry game in Example 317.1 is the
following:
Game Theory - A (Short) Introduction 285 9/12/2011
                 Acquiesce    Fight
Ready            3, 2*        1, 1
Unready          4*, 3*       0, 2
Out              2, 4*        2*, 4*
(the first payoff in each cell is the challenger's, the second the
incumbent's; * marks a best response)
10.3 Nash equilibrium
The game has two pure strategy Nash equilibria:
(Unready, Acquiesce)
(Out, Fight)
(The game also has a mixed strategy Nash equilibrium in which the
challenger uses the pure strategy Out and the probability assigned by the
incumbent to Acquiesce is at most 1/2.)
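As a quick check of these equilibria, here is a small sketch (Python, not part of the original slides) that enumerates the pure strategy Nash equilibria of the strategic form above by testing the two best response conditions cell by cell.

    # Strategic form of the entry game (Example 322.1).
    # payoffs[(row, col)] = (challenger payoff, incumbent payoff)
    payoffs = {
        ("Ready", "Acquiesce"): (3, 2), ("Ready", "Fight"): (1, 1),
        ("Unready", "Acquiesce"): (4, 3), ("Unready", "Fight"): (0, 2),
        ("Out", "Acquiesce"): (2, 4), ("Out", "Fight"): (2, 4),
    }
    rows = ["Ready", "Unready", "Out"]
    cols = ["Acquiesce", "Fight"]

    def is_pure_nash(r, c):
        # r must be a best response to c, and c a best response to r
        best_row = max(payoffs[(r2, c)][0] for r2 in rows)
        best_col = max(payoffs[(r, c2)][1] for c2 in cols)
        return payoffs[(r, c)][0] == best_row and payoffs[(r, c)][1] == best_col

    print([(r, c) for r in rows for c in cols if is_pure_nash(r, c)])
    # [('Unready', 'Acquiesce'), ('Out', 'Fight')]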

As in Chapter 5 (perfect information), the Nash equilibrium (Out, Fight) is not
plausible. The notion of subgame perfect equilibrium eliminates this strategy
profile by requiring that each player's strategy be optimal, given the other players'
strategies, for every history after which she moves, regardless of whether
that history occurs if the players adhere to their strategies.

The natural extension of this idea to games with imperfect information
requires that each player's strategy be optimal at each of her information
sets.
Game Theory - A (Short) Introduction 286 9/12/2011
10.3 Nash equilibrium
In Example 322.1, the incumbent's action Fight is unambiguously suboptimal
at its information set, because the incumbent prefers Acquiesce if the
challenger enters, regardless of whether the challenger is ready. So, any
equilibrium that assigns a positive probability to Fight does not satisfy the
additional requirement introduced by the notion of subgame perfect
equilibrium.

However, the implementation of the idea in other games may be less straightforward,
because the optimality of an action at an information set may depend on the
history that has occurred. Consider for example a variant of the entry game in
which the incumbent prefers to fight rather than to accommodate an unprepared
entrant (see Figure 323.1).
Game Theory - A (Short) Introduction 287 9/12/2011
10.3 Nash equilibrium
Game Theory - A (Short) Introduction 288 9/12/2011
Figure 323.1
10.3 Nash equilibrium
As in the original game, (Out, Fight) is a Nash equilibrium. But:
given that fighting is now optimal if the challenger enters
unprepared, the reasonableness of Fight in the modified game
depends on the history the incumbent believes has occurred;
and the challenger's strategy (Out) gives the incumbent no basis
on which to form such a belief.

Game Theory - A (Short) Introduction 289 9/12/2011
So, to study this situation, we must specify players beliefs.
10.4 Beliefs and sequential
equilibrium
A Nash equilibrium of a strategic game with imperfect
information is characterized by two requirements:
Each player chooses her best action given her belief about the other
players' actions
Each player's belief is correct

The notion of equilibrium we define here:
Embodies these two requirements;
Insists that they hold at each point at which a player has to choose
an action (like subgame perfect equilibrium in extensive games with
perfect information).
Game Theory - A (Short) Introduction 290 9/12/2011
10.4 Beliefs and sequential
equilibrium
10.4.1 Beliefs
We assume that at an information set that contains more than one
history, the player whose turn it is to move forms a belief about the
history that has occurred;
We model this belief as a probability distribution over the histories
in the information set;
We call a collection of beliefs (one for each information set) a belief
system.

Definition 324.1
A belief system in an extensive game is a function that
assigns to each information set a probability distribution over
the histories in that information set.

Game Theory - A (Short) Introduction 291 9/12/2011
10.4 Beliefs and sequential
equilibrium
Example: the entry game (317.1)
The belief system consists of a pair of probability distributions:
One assigns probability 1 to the empty history (the
challenger's belief at the start of the game)
The other assigns probabilities to the histories Ready and
Unready (the incumbent's belief after the challenger enters)

10.4.2 Strategies
Definition 324.2 (Behavioral strategy in extensive game)
A behavioral strategy of player i in an extensive game is a
function that assigns to each of player i's information sets Ii a
probability distribution over the actions in A(Ii), with the property
that each probability distribution is independent of every other
distribution.

Game Theory - A (Short) Introduction 292 9/12/2011
10.4 Beliefs and sequential
equilibrium
Note:
A behavioral strategy that assigns probability one to a single
action at each information set is equivalent to a pure strategy.
A behavioral strategy assigns probabilities to the actions at each
information set, whereas a mixed strategy assigns probabilities to
complete pure strategies.
In all the games that we study, behavioral strategies and
mixed strategies are equivalent, but behavioral strategies are
easier to deal with.
Example: the BoS game (314.2)
Each player has a single information set;
So, a behavioral strategy for each player is a single probability
distribution over her actions.
In this game, the set of behavioral strategies is identical to the
set of mixed strategies.
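The distinction can be made concrete with a small sketch (Python, not from the original slides) for a hypothetical player with two information sets, each offering actions L and R: a mixed strategy is one distribution over the four pure strategies, while a behavioral strategy randomizes independently at each information set.

    import itertools

    # A player with two information sets, each offering actions "L" and "R".
    actions = {"I1": ["L", "R"], "I2": ["L", "R"]}

    # Pure strategies: one action per information set -> 4 pure strategies.
    pure_strategies = [dict(zip(actions, combo))
                       for combo in itertools.product(*actions.values())]

    # A mixed strategy: one probability distribution over the pure strategies.
    mixed = {i: 0.25 for i in range(len(pure_strategies))}

    # A behavioral strategy: one independent distribution per information set.
    behavioral = {"I1": {"L": 0.5, "R": 0.5}, "I2": {"L": 0.5, "R": 0.5}}

    # The behavioral strategy above induces the same distribution over pure
    # strategies as the mixed strategy above (0.5 * 0.5 = 0.25 for each).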
Game Theory - A (Short) Introduction 293 9/12/2011
10.4 Beliefs and sequential
equilibrium
10.4.3 Equilibrium
Definition 325.1 (Assessment)
An assessment is a pair consisting of a profile of behavioral
strategies and a belief system.
An assessment is an equilibrium if it satisfies the following two
requirements:
Sequential rationality: each player's strategy is optimal
whenever she has to move, given her belief and the other players'
strategies;
Consistency of beliefs with strategies: each player's belief is
consistent with the strategy profile.

The sequential rationality requirement generalizes the requirement of subgame perfect
equilibrium: each player's strategy must be optimal in the part of the game
that follows each of her information sets, given the strategy profile and given
the player's belief about the history in the information set that has occurred,
regardless of whether the information set is reached if the players follow their
strategies.
Game Theory - A (Short) Introduction 294 9/12/2011
10.4 Beliefs and sequential
equilibrium
Example 325 and Figure 326.1
Game Theory - A (Short) Introduction 295 9/12/2011
10.4 Beliefs and sequential
equilibrium
Player 1's strategy is indicated by the red branches:
Select E at the start of the game;
Select J after the history (C,F)
Player 2's belief at her information set (numbers in brackets) is that
the history C has occurred with probability 2/3 and the history D has
occurred with probability 1/3.

Sequential rationality requires that player 2's strategy be optimal at her
information set, given the subsequent behavior specified by player 1's
strategy, even though this set is not reached if player 1 follows her
strategy. Player 2's expected payoff in the part of the game starting at
her information set is:
Strategy F : (2/3 x 0) + (1/3 x 1) = 1/3
Strategy G : (2/3 x 1) + (1/3 x 0) = 2/3
Sequential rationality therefore requires player 2 to select G.
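The same check as a small sketch (Python, not from the original slides): given a belief over the histories in an information set and the continuation payoffs of each available action, pick the action with the highest expected payoff. The payoff numbers are the ones read off the example around Figure 326.1.

    # Belief of player 2 over the histories in her information set {C, D}.
    belief = {"C": 2/3, "D": 1/3}

    # Player 2's continuation payoff for each action, after each history.
    payoff = {"F": {"C": 0, "D": 1},
              "G": {"C": 1, "D": 0}}

    def expected_payoff(action):
        return sum(belief[h] * payoff[action][h] for h in belief)

    for a in ("F", "G"):
        print(a, expected_payoff(a))          # F -> 1/3, G -> 2/3 (approx.)
    print(max(("F", "G"), key=expected_payoff))  # G is sequentially rational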



Game Theory - A (Short) Introduction 296 9/12/2011
10.4 Beliefs and sequential
equilibrium
Sequential rationality also requires that player 1's strategy be optimal at
each of her two (one-element) information sets, given player 2's
strategy:
Player 1's optimal action after history (C,F) is J;
If player 2's strategy is G, player 1's optimal actions at the start of the
game are D and E.

Thus, given player 2's strategy G, player 1 has two optimal strategies:
DJ and EJ.


Game Theory - A (Short) Introduction 297 9/12/2011
10.4 Beliefs and sequential
equilibrium
Sequential rationality requirement (more formal definition)
Denote by (β, μ) an assessment (β is a profile of behavioral strategies and μ is a
belief system);
Let Ii be an information set of player i;
Denote by O(β, μ | Ii) the probability distribution over terminal histories that results
if each history in Ii occurs with the probability assigned to it by player i's belief
(not necessarily the probability with which it occurs if the players adhere to β)
and, subsequently, the players adhere to the strategy profile β.

In Figure 326.1:
For the information set {C, D}, the probability distribution assigns probability 2/3 to the
terminal history (C,G) and probability 1/3 to (D,G)
Game Theory - A (Short) Introduction 298 9/12/2011
Sequential rationality requires, for each player i and each of her
information sets Ii, that her expected payoff to O(β, μ | Ii) be at least as
large as her expected payoff to O((γi, β-i), μ | Ii), for each of her
behavioral strategies γi.
10.4 Beliefs and sequential
equilibrium
The consistency of beliefs with strategies is a new requirement. In
a steady state, each player's belief must be correct: the probability it
assigns to any history must be the probability with which that history
occurs if the players adhere to their strategies.

The implementation of this idea is somewhat unclear at an information
set not reached if the players follow their strategies: every history in
such a set has probability 0 if the players follow their strategies. We deal with this
difficulty by allowing the player who moves at such an information set to hold any
belief at that information set.

The consistency requirement restricts the belief system only at information
sets reached with positive probability if every player adheres to her
strategy.
Game Theory - A (Short) Introduction 299 9/12/2011
10.4 Beliefs and sequential
equilibrium






Precisely, the consistency requirement imposes that the probability
assigned by the belief of the player who moves at an information set
reached with positive probability to each history h* in that information
set be equal to the probability that h* occurs according to the strategy
profile β, conditional on the information set being reached.
By Bayes' rule, this probability is:
   Pr(h* according to β) / Σ over histories h in the information set of Pr(h according to β)
Game Theory - A (Short) Introduction 300 9/12/2011
10.4 Beliefs and sequential
equilibrium
Figure 326.1
If player 1's behavioral strategy assigns probability 1 to action E at
the start of the game, the consistency requirement places no
restriction on player 2's belief (player 2's information set is not reached
if player 1 adheres to her strategy);
If player 1's strategy at the start of the game assigns positive
probability to C or to D, the consistency requirement comes into play:
Denote by p the probability assigned to C by player 1's strategy and
by q the probability assigned to D;
Consistency requires that player 2's belief assign probability
p/(p + q) to C and q/(p + q) to D.
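The same rule as a small sketch (Python, not from the original slides); consistent_belief is a hypothetical helper name, and the numbers illustrate the C/D case above.

    def consistent_belief(reach_probs):
        """Bayes-consistent belief at an information set.

        reach_probs maps each history in the information set to the probability
        with which it is reached under the strategy profile. Returns None when
        the information set is reached with probability 0 (consistency then
        places no restriction on the belief)."""
        total = sum(reach_probs.values())
        if total == 0:
            return None
        return {h: pr / total for h, pr in reach_probs.items()}

    # Player 1 plays C with probability 0.2 and D with probability 0.1:
    print(consistent_belief({"C": 0.2, "D": 0.1}))   # C: 2/3, D: 1/3 (approx.)
    # Player 1 plays E for sure: the information set {C, D} is unreached.
    print(consistent_belief({"C": 0.0, "D": 0.0}))   # None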
Game Theory - A (Short) Introduction 301 9/12/2011
10.4 Beliefs and sequential
equilibrium
Example 327.4: Consistency of beliefs in the entry game (Figures
317.1 and 323.1)
Denote by pR, pU and pO the probabilities that the challenger's strategy
assigns to Ready, Unready and Out.
If pO = 1, the consistency condition does not restrict the incumbent's
belief.
Otherwise, the condition requires that the incumbent assign
probability pR/(pR + pU) to Ready and pU/(pR + pU) to Unready.

Definition 328.1 (Weak sequential equilibrium)
An assessment (, ) (consisting of a behavioral strategy profile
and a belief system ) is a weak sequential equilibrium if it
satisfies the sequential rationality and the weak consistency of
beliefs with strategies.
Game Theory - A (Short) Introduction 302 9/12/2011
10.4 Beliefs and sequential
equilibrium
Figure 326.1
In this game, player 1's strategy EJ is sequentially rational given player 2's
strategy G, and player 2's strategy G is sequentially rational given the beliefs
indicated in the figure and player 1's strategy EJ.
The belief is consistent with the strategy profile (EJ,G), because this profile
does not lead to player 2's information set.
Thus the game has a weak sequential equilibrium.

Note:
In an extensive game with perfect information, only one belief system is
possible (each player believes at each information set that a single
compatible history has occurred with probability 1);
Therefore, in an extensive game with perfect information, the strategy profile
in any weak sequential equilibrium is a subgame perfect equilibrium.
The strategy profile in any weak sequential equilibrium is a Nash equilibrium
(if an assessment is a weak sequential equilibrium, then each player's
strategy in the assessment is optimal at the beginning of the game, given the
other players' strategies).
Game Theory - A (Short) Introduction 303 9/12/2011
10.4 Beliefs and sequential
equilibrium
How to find weak sequential equilibria?
We can use a combination of the techniques for finding subgame
perfect equilibria of extensive games with perfect information and
for finding Nash equilibria of strategic games;
We can find all the Nash equilibria of the game, and then check
which of these equilibria are associated with weak sequential
equilibria.

Figure 326.1
Does the game have a weak sequential equilibrium in which player 1
chooses E?
If player 1 chooses E, player 2's belief is not restricted by consistency;
We therefore need to ask:
Whether any strategy of player 2 makes E optimal for player 1;
Whether there is a belief of player 2 that makes any such strategy
optimal.
Game Theory - A (Short) Introduction 304 9/12/2011
10.4 Beliefs and sequential
equilibrium
We see that:
E is optimal for player 1 if and only if player 2 chooses F with probability at
most 2/3:
Any such strategy of player 2 is optimal if player 2
believes the history is C with probability 1/2 (she is then
indifferent between F and G)
The strategy of choosing F with probability 0 (i.e. G) is optimal if
player 2 believes the history is C with any probability of at
least 1/2
Thus: an assessment is a weak sequential equilibrium if player 1's
strategy is EJ and player 2:
Either chooses F with positive probability at most 2/3 and believes
that the history is C with probability 1/2
Or chooses G and believes that the history is C with
probability at least 1/2

Game Theory - A (Short) Introduction 305 9/12/2011
10.4 Beliefs and sequential
equilibrium
Example 330.1 (Weak sequential equilibria of entry game, example
317.1)
The entry game has two pure strategy Nash equilibria: (Unready, Acquiesce)
and (Out,Fight)
Consider (Unready, Acquiesce):
Consistency requires that the incumbent believe that the history is
Unready at its information set (because Unready is the challenger's
strategy), making Acquiesce optimal;
The game has a weak sequential equilibrium in which the strategy
profile is (Unready, Acquiesce) and the incumbent's belief is that the
history is Unready;
Consider (Out, Fight):
Regardless of the incumbent's belief at its information set, Fight is
not an optimal action in the remainder of the game: for every belief,
Acquiesce yields a higher payoff than Fight.
No assessment in which the strategy profile is (Out, Fight) is both
sequentially rational and consistent.


Game Theory - A (Short) Introduction 306 9/12/2011
10.4 Beliefs and sequential
equilibrium
Why "weak" sequential equilibrium?
The consistency condition's limitation to information sets reached with
positive probability generates, in some games, a relatively large set
of equilibrium assessments;
Some of these equilibrium assessments do not plausibly correspond
to steady states. Consider the following variant of the entry game:
Game Theory - A (Short) Introduction 307 9/12/2011
10.4 Beliefs and sequential
equilibrium
In this variant, Ready is better than Unready for the challenger,
regardless of the incumbent's action;
This game has a weak sequential equilibrium in which the
challenger's strategy is Out, the incumbent's strategy is Fight, and the
incumbent believes at its information set that the history is Unready
(with probability one);
In this equilibrium, the incumbent believes that the challenger has
chosen Unready, although this action is dominated by Ready for
the challenger. This belief does not seem reasonable.
Game Theory - A (Short) Introduction 308 9/12/2011
10.5 Signaling games
In many interactions, information is asymmetric: some parties
are better informed than others.

In one interesting class of situations, the informed parties have
the opportunity to take actions observed by the uninformed parties
before the uninformed parties take actions that affect everyone: the
informed parties' actions may signal their information.

Game Theory - A (Short) Introduction 309 9/12/2011
10.5 Signaling games
Example 332.1: Entry as a signaling game.
The challenger is strong with probability p and weak with probability
1 - p (with 0 < p < 1).
The challenger knows its type but the incumbent does not.
The challenger may either ready itself for battle or remain unready.
The incumbent observes the challenger's readiness but not its type
and chooses either to fight or to acquiesce.
An unready challenger's payoff is 5 if the incumbent acquiesces to its
entry.
Preparations cost a strong challenger 1 unit of payoff and a weak
one 3 units, and fighting entails a loss of 2 units for each type.
The incumbent prefers to fight (payoff 1) rather than to acquiesce to
(payoff 0) a weak challenger and prefers to acquiesce to (payoff 2)
rather than to fight (payoff -1) a strong one.

Game Theory - A (Short) Introduction 310 9/12/2011
10.5 Signaling games
Figure 333.1
Game Theory - A (Short) Introduction 311 9/12/2011
10.5 Signaling games
Figure 333.1 models this situation:
The empty history is in the center of the diagram
The first move is made by chance (which determines the
challenger's type)
Both types have two actions (so the challenger has four
strategies)
The incumbent has two information sets, at each of which it
has two actions (A and F), and thus also four strategies
Searching for pure strategy weak sequential equilibria
Note that a weak challenger prefers Unready to Ready,
regardless of the incumbent's actions (even if the incumbent
acquiesces to a ready challenger and fights an unready one). Thus, in any
weak sequential equilibrium, a weak challenger chooses
Unready.

Game Theory - A (Short) Introduction 312 9/12/2011
10.5 Signaling games
Consider each possible action of a strong challenger
Strong challenger chooses Ready
Both of the incumbent's information sets are reached, so the consistency
condition restricts its beliefs at each set;
At the top information set, the incumbent must believe that the
history was (Strong, Ready) with probability one (because a weak
challenger never chooses Ready), and hence choose A;
At the bottom information set, the incumbent must believe that the
history was (Weak, Unready), and hence choose F;
Thus, if the challenger deviates and chooses Unready when he is
strong, he is worse off (he gets 3 rather than 4);
We conclude that the game has a weak sequential equilibrium in
which the challenger chooses Ready when he is strong and Unready
when he is weak. The incumbent acquiesces when it sees Ready
and fights when it sees Unready.

Game Theory - A (Short) Introduction 313 9/12/2011
10.5 Signaling games
Strong challenger chooses Unready
At his bottom information set, the incumbent believes, by
consistency, that the history was (Strong, Unready) with
probability p and (Weak, Unready) with probability 1 - p.
Thus, his expected payoff is:
To A = p (2) + (1 - p) (0) = 2p
To F = p (-1) + (1 - p) (1) = 1 - 2p
A is therefore optimal if p ≥ 1/4 and F is optimal if p ≤ 1/4.
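A quick numeric check of this cutoff (a Python sketch, not part of the original slides), using the expected payoffs just computed:

    # Incumbent's expected payoffs at the "Unready" information set,
    # given belief p on (Strong, Unready) and 1-p on (Weak, Unready).
    def acquiesce(p):
        return p * 2 + (1 - p) * 0      # = 2p

    def fight(p):
        return p * (-1) + (1 - p) * 1   # = 1 - 2p

    for p in (0.1, 0.25, 0.5):
        best = "A" if acquiesce(p) >= fight(p) else "F"
        print(p, round(acquiesce(p), 2), round(fight(p), 2), best)
    # p = 0.10: A = 0.2, F = 0.8 -> F optimal
    # p = 0.25: A = 0.5, F = 0.5 -> indifferent (the tie-break prints A)
    # p = 0.50: A = 1.0, F = 0.0 -> A optimal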

Game Theory - A (Short) Introduction 314 9/12/2011
10.5 Signaling games
Suppose that p ≥ 1/4 and the incumbent chooses A in
response to Unready:
A strong challenger who chooses Unready obtains
the payoff of 5;
If he switches to Ready, his payoff is less than 5
regardless of the incumbent's action;
Thus, if p ≥ 1/4, the game has a weak sequential
equilibrium in which both types of challenger choose
Unready and the incumbent acquiesces to an
unready challenger. The incumbent may hold any
belief about the type of a ready challenger and,
depending on this belief, may fight or acquiesce.
Game Theory - A (Short) Introduction 315 9/12/2011
10.5 Signaling games
Now suppose that p ≤ 1/4 and the incumbent chooses F in response to
Unready:
A strong challenger who chooses Unready obtains the payoff of 3. If he
switches to Ready, his payoff is 2 if the incumbent fights and 4 if it
acquiesces. Thus, for an equilibrium, the incumbent must fight a ready
challenger.
If the incumbent believes that a ready challenger is weak with high
enough probability (at least 3/4, given its payoffs above), fighting is indeed
optimal.
Is such a belief consistent with equilibrium? Yes: the consistency condition does
not restrict the incumbent's belief upon observing Ready, because this
action is not taken when the challenger follows his strategy of choosing
Unready regardless of his type.
Thus, if p ≤ 1/4, the game has a weak sequential equilibrium in which:
Both types of challenger choose Unready
The incumbent fights regardless of the challenger's action
The incumbent assigns probability of at least 3/4 to the challenger
being weak if it observes that the challenger is ready for battle.
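The 3/4 cutoff for this off-path belief can be checked in the same way (Python sketch, not from the original slides), using the incumbent's payoffs against a ready challenger stated in Example 332.1 (acquiesce: 2 against a strong type, 0 against a weak one; fight: -1 against a strong type, 1 against a weak one):

    # w = incumbent's belief that a ready challenger is weak.
    def acquiesce_ready(w):
        return w * 0 + (1 - w) * 2     # 2(1 - w)

    def fight_ready(w):
        return w * 1 + (1 - w) * (-1)  # 2w - 1

    # Fight is optimal iff 2w - 1 >= 2(1 - w), i.e. w >= 3/4.
    for w in (0.5, 0.75, 0.9):
        print(w, fight_ready(w) >= acquiesce_ready(w))
    # 0.5 -> False, 0.75 -> True, 0.9 -> True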
Game Theory - A (Short) Introduction 316 9/12/2011
10.5 Signaling games
This example shows that two kinds of pure strategy equilibrium
may exist in signaling games:

Separating equilibrium: each type of sender (of the signal)
chooses a different action so that, upon observing the sender's
action, the receiver (of the signal) knows the sender's type;

Pooling equilibrium: all types of the sender choose the same
action, so that the sender's action gives the receiver no clue about the
sender's type.

Note: if the sender has more than two types, mixtures of these
kinds of equilibrium may exist (the set of types may be divided
into groups, within each of which all types choose the same
action and between which the actions differ).
Game Theory - A (Short) Introduction 317 9/12/2011
10.8 Strategic information
transmission
The situation
You research the market for a new product and submit a report to
your boss, who decides which product to develop;
Your preferences differ from those of your boss:
You are interested in promoting the interests of your division;
Your boss is interested in promoting the interests of the whole
firm.
If you report the results of your research without distortion, the
product your boss will choose is not the best one for you.
If you systematically distort your findings, your boss will be able to
unravel your report and deduce your actual findings.
Obfuscation therefore seems a more promising route.
Game Theory - A (Short) Introduction 318 9/12/2011
10.8 Strategic information
transmission
The model
A sender (you) observes the state t, a number between 0 and 1,
that a receiver (the boss) cannot see;
The state is uniformly distributed: Pr(t ≤ x) = x for 0 ≤ x ≤ 1;
The sender submits a report r (a number) to the receiver;
The receiver observes the report and takes an action y (a number);
The payoff functions are:
Sender: -(y - (t + b))^2
Receiver: -(y - t)^2
where b (the sender's bias) is a fixed number that reflects the
divergence between the sender's and the receiver's preferences.
Note that the receiver's optimal action is y = t and the sender's
optimal action is y = t + b (see Figure 343.1).
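A tiny sketch of these payoff functions (Python, not part of the original slides); the grid search simply confirms the two optimal actions for the illustrative values t = 0.3 and b = 0.1.

    # Quadratic-loss payoffs of the strategic information transmission model.
    def sender_payoff(y, t, b):
        return -(y - (t + b)) ** 2

    def receiver_payoff(y, t):
        return -(y - t) ** 2

    # With t = 0.3 and b = 0.1, the receiver's payoff is maximized at y = t
    # and the sender's at y = t + b.
    t, b = 0.3, 0.1
    ys = [i / 100 for i in range(101)]
    print(max(ys, key=lambda y: receiver_payoff(y, t)))    # 0.3
    print(max(ys, key=lambda y: sender_payoff(y, t, b)))   # 0.4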
Game Theory - A (Short) Introduction 319 9/12/2011
10.8 Strategic information
transmission
Figure 343.1 (players payoff functions)
Game Theory - A (Short) Introduction 320 9/12/2011
10.8 Strategic information
transmission
10.8.1 Perfect information transmission?
Consider an equilibrium in which the sender accurately reports the
state he observes: r = t
Given this strategy, the consistency condition requires that the
receiver believe (correctly) that the state is t when the sender
reports t. The receiver hence optimally chooses the action y = t (the
maximizer of -(y - t)^2).
Is the sender's strategy a best response to the receiver's strategy?
Not if b > 0. Suppose the state is t. If the sender reports t, the
receiver chooses y = t and the sender's payoff is -b^2. If the sender
instead reports t + b, the receiver chooses y = t + b and the
sender's payoff is 0.
So, unless the sender's and the receiver's preferences are the same
(b = 0), the game has no equilibrium in which the sender accurately
reports the state.
Game Theory - A (Short) Introduction 321 9/12/2011
10.8 Strategic information
transmission
10.8.2 No information transmission?
Consider an equilibrium in which the sender reports a constant
value: r = k for every state.
The consistency condition requires that if the receiver observes the
report k, his belief remains the same as it was initially (state
uniformly distributed between 0 and 1). The expected value of t is
then 1/2 and his optimal action (the action that maximizes his
expected payoff) is y = 1/2.
The consistency condition does not constrain the receiver's belief
about the state upon receiving a report different from k: such a
report does not occur if the sender follows her strategy.
Note also that if the receiver simply ignores the sender's report
completely, his optimal action remains the same. Because the sender's
report has no effect on the receiver's optimal action, any constant
report is optimal for the sender and, in particular, r = k is optimal.
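The action 1/2 here is just the mean of the receiver's (uniform) belief: with a quadratic loss, expected payoff is maximized at the expected state. A short numeric check (Python sketch, not from the original slides):

    # Expected receiver payoff -(y - t)^2 when t is uniform on [lo, hi],
    # approximated on a grid; the maximizer is the midpoint (lo + hi) / 2.
    def expected_receiver_payoff(y, lo, hi, n=1000):
        ts = [lo + (hi - lo) * (i + 0.5) / n for i in range(n)]
        return sum(-(y - t) ** 2 for t in ts) / n

    ys = [i / 1000 for i in range(1001)]
    best = max(ys, key=lambda y: expected_receiver_payoff(y, 0.0, 1.0))
    print(best)   # 0.5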
Game Theory - A (Short) Introduction 322 9/12/2011
10.8 Strategic information
transmission
In summary, for every value of b, the game has a weak sequential
equilibrium in which the sender's report conveys no information
(constant report), and the receiver ignores the report (he maintains his
initial belief about the state) and takes the action that maximizes his
expected payoff.
If b is small, this equilibrium is not very attractive for either the
sender or the receiver. For example, if b = 1/4, for any t with
0 ≤ t < 1/4, both the sender and the receiver are better off if the
receiver's action is t + b.


Game Theory - A (Short) Introduction 323 9/12/2011
10.8 Strategic information
transmission
10.8.3 Some information transmission
Does the game have equilibria in which some information is
transmitted?
Suppose the sender makes one of two reports:
r1 if 0 ≤ t ≤ t1
r2 if t1 < t ≤ 1
with r1 ≠ r2.
Consider the receiver's optimal response to this strategy:
If he sees the report r1, the consistency condition requires that
he believe that t is uniformly distributed between 0 and t1.
His optimal action is then y = (1/2) t1
Similarly, if he sees the report r2, the consistency condition
requires that he believe that t is uniformly distributed
between t1 and 1. His optimal action is then y = (1/2)(t1 + 1)


Game Theory - A (Short) Introduction 324 9/12/2011
10.8 Strategic information
transmission
The consistency condition does not restrict the receiver's belief if
he sees a report other than r1 or r2. Assume therefore that for
any such report, the receiver's belief is one of the two beliefs he
holds if he sees r1 or r2 (so that his optimal action is either
y = (1/2) t1 or y = (1/2)(1 + t1)).
Now, for equilibrium, we need the sender's report r1 to be
optimal if 0 ≤ t ≤ t1 and the report r2 to be optimal if t1 < t ≤ 1,
given the receiver's strategy.
By changing his report, the sender can change the receiver's
optimal action from (1/2) t1 to (1/2)(1 + t1). So, for the report r1 to be
optimal when 0 ≤ t ≤ t1, the sender must like the action (1/2) t1 at least as
much as (1/2)(1 + t1) (and vice-versa for the report r2).
In particular, in state t1 the sender must be indifferent
between the two actions (1/2) t1 and (1/2)(1 + t1):

Game Theory - A (Short) Introduction 325 9/12/2011
10.8 Strategic information
transmission
This indifference implies that t1 + b (the sender's preferred
action in state t1) is midway between (1/2) t1 and (1/2)(1 + t1) (the receiver's
optimal actions). So (see Figure 346.1):
t1 + b = (1/2) [ (1/2) t1 + (1/2)(1 + t1) ]
that is,
t1 = 1/2 - 2b
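A small verification of this boundary (Python sketch, not part of the original slides): at t1 = 1/2 - 2b the sender in state t1 is indeed indifferent between the two actions the receiver may take.

    def sender_payoff(y, t, b):
        return -(y - (t + b)) ** 2

    b = 0.1
    t1 = 0.5 - 2 * b                  # boundary of the two-report equilibrium
    low_action = 0.5 * t1             # receiver's action after report r1
    high_action = 0.5 * (1 + t1)      # receiver's action after report r2

    # In state t1 the sender is indifferent between the two actions:
    print(sender_payoff(low_action, t1, b), sender_payoff(high_action, t1, b))
    # both approximately -0.0625 when b = 0.1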
Game Theory - A (Short) Introduction 326 9/12/2011
Figure 346.1
10.8 Strategic information
transmission
We need t1 > 0: this condition is satisfied only if b < 1/4. If b ≥ 1/4,
the game has no equilibrium in which the sender makes two
different reports. Put differently, if preferences diverge too
much, there is no point in asking the sender to submit a report.
The receiver should simply take the action that is best for himself
given his prior belief.
t1 = 1/2 - 2b is not only a necessary condition for equilibrium
but also a sufficient condition. Indeed, in such a case:
In every state t with 0 ≤ t < t1: the sender optimally reports r1
In every state t with t1 < t ≤ 1: the sender optimally reports r2
This follows from the shape of the payoff function, which is
symmetric (see Figure 346.2)
Game Theory - A (Short) Introduction 327 9/12/2011
10.8 Strategic information
transmission









Game Theory - A (Short) Introduction 328 9/12/2011
Figure 346.1
10.8 Strategic information
transmission
This equilibrium is better for both the receiver and the sender
than the one in which no information is transmitted. Consider
the receiver:
If no information is transmitted, he takes the action 1/2 in all states
and his payoff in state t is
-(1/2 - t)^2
In this two-report equilibrium, his payoff is:
-((1/2) t1 - t)^2 for 0 ≤ t ≤ t1
-((1/2)(t1 + 1) - t)^2 for t1 < t ≤ 1
Game Theory - A (Short) Introduction 329 9/12/2011
10.8 Strategic information
transmission
10.8.4 How much information transmission?
For b < 1/4, does the game have equilibria in which more information
is transmitted than in the two-report equilibrium?
Consider an equilibrium in which the sender makes one of K
reports, depending on the state. Specifically, the sender's report is:
r1 if 0 ≤ t ≤ t1
r2 if t1 < t ≤ t2
...
rK if t(K-1) < t ≤ 1
where rj ≠ rk for j ≠ k.
The equilibrium analysis follows the same lines as for the two-report
equilibrium.
Game Theory - A (Short) Introduction 330 9/12/2011
10.8 Strategic information
transmission
Specifically:
If the receiver observes the report rk, then the consistency
condition requires that he believe the state to be uniformly
distributed between t(k-1) and tk (with t0 = 0 and tK = 1).
Therefore, he optimally takes the action (1/2)(t(k-1) + tk).
If he observes a report different from any rk, the consistency
condition does not restrict his belief. We assume that his belief
in such a case is the belief he holds upon receiving one of the
reports rk.
Now, for equilibrium, we need the sender's report rk to be
optimal when the state is t with t(k-1) < t ≤ tk, for k = 1, ..., K.
A sufficient condition for optimality is that, in each state tk,
k = 1, ..., K-1, the sender be indifferent between the
reports rk and r(k+1) and, therefore, between the receiver's
actions (1/2)(t(k-1) + tk) and (1/2)(tk + t(k+1)).
Game Theory - A (Short) Introduction 331 9/12/2011
10.8 Strategic information
transmission
This indifference implies that tk + b is equal to the average of
(1/2)(t(k-1) + tk) and (1/2)(tk + t(k+1)):
tk + b = (1/2) [ (1/2)(t(k-1) + tk) + (1/2)(tk + t(k+1)) ]
or
t(k+1) - tk = tk - t(k-1) + 4b
That is to say, the interval of states for which the
sender's report is r(k+1) is longer by 4b than the interval for
which the report is rk.
The length of the first interval, from 0 to t1, is t1. The sum of
the lengths of all the intervals must be equal to one:
t1 + (t1 + 4b) + ... + (t1 + (K-1) 4b) = 1
or
K t1 + 4b (1 + 2 + ... + (K-1)) = 1
Game Theory - A (Short) Introduction 332 9/12/2011
10.8 Strategic information
transmission
The sum of the first K-1 positive integers is (1/2) K (K-1), so:
K t1 + 2b K (K-1) = 1
If b is small enough that 2b K (K-1) < 1, there is a positive value
of t1 that satisfies the equation:
If 1/24 ≤ b < 1/12, the inequality is satisfied for K ≤ 3 (but not for
K = 4).
So, in that case, in the equilibrium in which the most information is
transmitted, the sender chooses one of three reports.
From K t1 + 2b K (K-1) = 1 with K = 3, we have t1 = 1/3 - 4b and
t2 = 2/3 - 4b.
Figure 348.2 shows the equilibrium action taken by the
receiver as a function of the state t.
The values of the reports rk do not matter as long as no two
are the same (we think of them as words in a language).
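These boundaries are easy to check numerically. The sketch below (Python, not from the original slides) builds the K interval boundaries from t1 and the spacing rule t(k+1) - tk = tk - t(k-1) + 4b, and shows that they end exactly at 1; boundaries is a hypothetical helper name.

    def boundaries(K, b):
        """Return [t0, t1, ..., tK] = [0, ..., 1] for the K-report equilibrium."""
        t1 = (1 - 2 * b * K * (K - 1)) / K   # from K*t1 + 2b*K*(K-1) = 1
        ts = [0.0, t1]
        for k in range(1, K):
            # each interval is 4b longer than the previous one
            ts.append(ts[-1] + (ts[-1] - ts[-2]) + 4 * b)
        return ts

    b = 1 / 20
    print(boundaries(3, b))
    # [0.0, 0.1333..., 0.4666..., 1.0], i.e. t1 = 1/3 - 4b and t2 = 2/3 - 4b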

Game Theory - A (Short) Introduction 333 9/12/2011
10.8 Strategic information
transmission
Figure 348.2
Game Theory - A (Short) Introduction 334 9/12/2011
10.8 Strategic information
transmission
In summary:
If there is a positive value of t1 that satisfies K t1 + 2b K (K-1) = 1,
then the game has a weak sequential equilibrium in which the
sender submits one of K different reports, depending on the state.
For any given value of b, the largest value of K for which an
equilibrium exists is the largest value of K for which 2b K (K-1) < 1.
If 2b K (K-1) = 1, then by the quadratic formula
K = (1/2) (1 + sqrt(1 + 2/b)).
Thus the larger the value of b, the smaller the largest
value of K possible in an equilibrium.
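A short sketch (Python, not part of the original slides) of this bound: it computes the largest K with 2bK(K-1) < 1 both by direct search and via the closed-form expression above; the function names are illustrative.

    import math

    def max_reports_search(b):
        K = 1
        while 2 * b * (K + 1) * K < 1:   # can we afford one more report?
            K += 1
        return K

    def max_reports_formula(b):
        # largest integer K with 2*b*K*(K-1) < 1, from K = (1/2)(1 + sqrt(1 + 2/b))
        K = math.floor(0.5 * (1 + math.sqrt(1 + 2 / b)))
        if 2 * b * K * (K - 1) >= 1:     # guard against the boundary case
            K -= 1
        return K

    for b in (0.2, 0.05, 0.01):
        print(b, max_reports_search(b), max_reports_formula(b))
    # b = 0.2 -> 2, 2   b = 0.05 -> 3, 3   b = 0.01 -> 7, 7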

Game Theory - A (Short) Introduction 335 9/12/2011
The greater the difference between the sender's and receiver's
preferences, the coarser the information transmitted in the
equilibrium with the largest number of steps (the most informative
equilibrium).
