
Game Theory

Critical Concepts

PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information.
PDF generated at: Sat, 17 Dec 2011 02:14:34 UTC
Contents
Articles
Introduction
Game theory
Nash equilibrium

Definitions
Cooperative game
Information set
Preference
Normal-form game
Extensive-form game
Succinct game

Equilibrium Concepts
Trembling hand perfect equilibrium
Proper equilibrium
Evolutionarily stable strategy
Risk dominance
Self-confirming equilibrium

Strategies
Dominance
Strategy
Tit for tat
Grim trigger
Collusion
Backward induction
Markov strategy

Game Classes
Symmetric game
Perfect information
Simultaneous game
Sequential game
Repeated game
Signaling games
Cheap talk
Zero-sum
Mechanism design
Bargaining Problem
Stochastic game
Large Poisson game
Nontransitive game
Global game

Games
Prisoner's dilemma
Traveler's dilemma
Coordination game
Chicken
Centipede game
Volunteer's dilemma
Dollar auction
Battle of the sexes
Stag hunt
Matching pennies
Ultimatum game
Rock-paper-scissors
Pirate game
Dictator game
Public goods game
Blotto games
War of attrition
El Farol Bar problem
Fair division
Cournot competition
Deadlock
Unscrupulous diner's dilemma
Guess 2/3 of the average
Kuhn poker
Nash bargaining game
Screening game
Princess and monster game

Theorems
Minimax
Purification theorem
Folk theorem
Revelation principle
Arrow's impossibility theorem

Additional Reading
Tragedy of the commons
Tyranny of small decisions
All-pay auction
List of games in game theory

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License

Introduction

Game theory
Game theory is the mathematical study of strategic situations, such as games, in which a person's
success depends upon the choices of others. More formally, it is "the study of mathematical models of conflict and
cooperation between intelligent rational decision-makers."[1] An alternative term suggested "as a more descriptive
name for the discipline" is interactive decision theory.[2] Game theory is mainly used in economics, political science,
and psychology, as well as in logic and biology. The subject first addressed zero-sum
games, in which one person's gains exactly equal the net losses of the other participant(s). Today, however, game theory
applies to a wide range of behavioral relations, and has developed into an umbrella term for the logical side of decision
science, including both humans and non-humans, like computers. Classic uses include finding equilibria in numerous
games, where each person has found or developed a strategy that cannot successfully improve his results, given the
other players' strategies.
Mathematical game theory began with a series of papers by Émile Borel, which led to his book
Applications aux Jeux de Hasard. However, his results were limited, and his conjecture on the non-existence of
mixed-strategy equilibria in two-player games was incorrect. Modern game theory began with the idea of the
existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von
Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets,
which became a standard method in game theory and mathematical economics. His paper was followed by his 1944
book Theory of Games and Economic Behavior, written with Oskar Morgenstern, which considered cooperative games
of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed
mathematical statisticians and economists to treat decision-making under uncertainty.
This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to
biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been
widely recognized as an important tool in many fields. Eight game theorists have won the Nobel Memorial Prize in
Economic Sciences, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to
biology.

History
Early discussions of examples of two-person games occurred long
before the rise of modern, mathematical game theory. The first known
discussion of game theory occurred in a letter written by James
Waldegrave in 1713. In this letter, Waldegrave provides a minimax
mixed strategy solution to a two-person version of the card game le
Her. James Madison made what we now recognize as a game-theoretic
analysis of the ways states can be expected to behave under different
systems of taxation.[3] [4] In his 1838 Recherches sur les principes
mathématiques de la théorie des richesses (Researches into the
Mathematical Principles of the Theory of Wealth), Antoine Augustin
Cournot considered a duopoly and presented a solution that is a
restricted version of the Nash equilibrium.

The Danish mathematician Zeuthen proved that a mathematical model has a winning strategy by using Brouwer's
fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a
minimax theorem for two-person zero-sum matrix games only when the pay-off matrix was symmetric. Borel
conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was
proved false.

John von Neumann

Game theory did not really exist as a unique field until John von Neumann published a paper in 1928.[5] Von
Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets,
which became a standard method in game theory and mathematical economics. His work culminated in the 1944
book Theory of Games and Economic Behavior, written with Oskar Morgenstern, which considered cooperative games
of several players and contains the method for finding mutually consistent solutions for two-person zero-sum games.
The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical
statisticians and economists to treat decision-making under uncertainty. During this time period, work on game theory
was primarily focused on cooperative game theory, which analyzes optimal strategies for groups of individuals,
presuming that they can enforce agreements between them about proper strategies.[6]
In 1950, the first discussion of the prisoner's dilemma appeared, and an experiment was undertaken on this game at
the RAND Corporation. Around this same time, John Nash developed a criterion for mutual consistency of players'
strategies, known as Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von
Neumann and Morgenstern. This equilibrium is sufficiently general to allow for the analysis of non-cooperative
games in addition to cooperative ones.
Game theory experienced a flurry of activity in the 1950s, during which time the concepts of the core, the extensive
form game, fictitious play, repeated games, and the Shapley value were developed. In addition, the first applications
of game theory to philosophy and political science occurred during this time.
In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the
Nash equilibrium (later he would introduce trembling hand perfection as well). In 1967, John Harsanyi developed
the concepts of incomplete information and Bayesian games. Nash, Selten and Harsanyi became Economics Nobel
Laureates in 1994 for their contributions to economic game theory.
In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith
and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection,
and common knowledge[7] were introduced and analyzed.

In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten and Harsanyi as Nobel
Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed
more to the equilibrium school, introducing an equilibrium coarsening, correlated equilibrium, and developing an
extensive formal analysis of the assumption of common knowledge and of its consequences.
In 2007, Leonid Hurwicz, together with Eric Maskin and Roger Myerson, was awarded the Nobel Prize in
Economics "for having laid the foundations of mechanism design theory." Myerson's contributions include the
notion of proper equilibrium, and an important graduate text: Game Theory, Analysis of Conflict (Myerson 1997).
Hurwicz introduced and formalized the concept of incentive compatibility.

Representation of games
The games studied in game theory are well-defined mathematical objects. A game consists of a set of players, a set
of moves (or strategies) available to those players, and a specification of payoffs for each combination of strategies.
Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms
are used to define noncooperative games.

Extensive form
The extensive form can be used to formalize games with a time
sequencing of moves. Games here are played on trees (as pictured to
the left). Here each vertex (or node) represents a point of choice for a
player. The player is specified by a number listed by the vertex. The
lines out of the vertex represent a possible action for that player. The
payoffs are specified at the bottom of the tree. The extensive form can
be viewed as a multi-player generalization of a decision tree.

An extensive form game (Fudenberg & Tirole 1991, p. 67)

In the game pictured to the left, there are two players. Player 1 moves first and chooses either F or U. Player 2 sees
Player 1's move and then chooses A or R. Suppose that Player 1 chooses U and then Player 2 chooses A; then Player
1 gets 8 and Player 2 gets 2.
The extensive form can also capture simultaneous-move games and games with imperfect information. To represent
it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e., the
players do not know at which point they are), or a closed line is drawn around them. (See example in the imperfect
information section.)
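
A minimal sketch of how such a game tree can be represented and solved by backward induction, in Python. The
(8, 2) payoff reached by U followed by A is taken from the example above; the other three payoff pairs are
hypothetical placeholders, since the full figure is not reproduced here.

    # A decision node maps each action to a subtree; a leaf is a payoff tuple.
    # Only the (8, 2) outcome (U then A) comes from the text; the rest are assumed.
    tree = {
        "player": 1,
        "moves": {
            "U": {"player": 2, "moves": {"A": (8, 2), "R": (0, 0)}},  # (0, 0) assumed
            "F": {"player": 2, "moves": {"A": (5, 5), "R": (2, 1)}},  # both assumed
        },
    }

    def backward_induction(node):
        """Return the payoff vector reached under optimal play from this node."""
        if isinstance(node, tuple):       # leaf: payoffs for (player 1, player 2)
            return node
        mover = node["player"] - 1        # index of the player moving at this node
        # The mover picks the action whose subtree yields them the highest payoff.
        return max((backward_induction(child) for child in node["moves"].values()),
                   key=lambda payoffs: payoffs[mover])

    print(backward_induction(tree))       # (8, 2) with these placeholder payoffs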

Normal form

                        Player 2        Player 2
                        chooses Left    chooses Right

Player 1 chooses Up     4, 3            –1, –1

Player 1 chooses Down   0, 0            3, 4

Normal form or payoff matrix of a 2-player, 2-strategy game

The normal (or strategic form) game is usually represented by a matrix which shows the players, strategies, and
payoffs (see the example to the right). More generally it can be represented by any function that associates a payoff
for each player with every possible combination of actions. In the accompanying example there are two players; one
chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number
of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received
by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our
example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2
gets 3.
When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without
knowing the actions of the other. If players have some information about the choices of other players, the game is
usually presented in extensive form.
Every extensive-form game has an equivalent normal-form game, however the transformation to normal form may
result in an exponential blowup in the size of the representation, making it computationally impractical.
(Leyton-Brown & Shoham 2008, p. 35)
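
A minimal sketch of the payoff matrix above as a data structure, assuming the row-player/column-player
convention just described.

    # Payoff bimatrix of the 2-player, 2-strategy game above:
    # payoffs[(row strategy, column strategy)] = (row payoff, column payoff)
    payoffs = {
        ("Up", "Left"): (4, 3),   ("Up", "Right"): (-1, -1),
        ("Down", "Left"): (0, 0), ("Down", "Right"): (3, 4),
    }

    row_payoff, col_payoff = payoffs[("Up", "Left")]
    print(row_payoff, col_payoff)   # 4 3, matching the example in the text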

Characteristic function form


In games with transferable utility, separate rewards are not given; rather, the characteristic function determines the
payoff of each coalition. The standard assumption is that the coalition that is 'empty', so to speak, receives no
reward at all.
The origin of this form is to be found in John von Neumann and Oskar Morgenstern's book; when looking at these
instances, they assumed that when a coalition C forms, it plays against the complementary coalition (N∖C) as if two
individuals were playing a normal game. The payoff that C can guarantee itself in this way defines its characteristic
value. Although there are differing ways to derive coalitional values from normal-form games, not all games in
characteristic function form can be derived from such.
Formally, a characteristic function game is a pair (N, v), where N is the set of players and v : 2^N → R assigns a
value to each coalition.
Such characteristic functions have been extended to describe games where there is no transferable utility.
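
A minimal sketch of a characteristic function game as a mapping from coalitions to values; the three-player
numbers below are hypothetical, chosen only to illustrate the (N, v) structure.

    from itertools import combinations

    # Hypothetical game (N, v): v assigns a value to every coalition S of N,
    # with v(empty set) = 0 by convention.
    N = frozenset({1, 2, 3})
    v = {
        frozenset(): 0,
        frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
        frozenset({1, 2}): 90, frozenset({1, 3}): 80, frozenset({2, 3}): 70,
        N: 120,
    }

    def is_superadditive(v, N):
        """Check that disjoint coalitions never lose value by merging."""
        subsets = [frozenset(c) for r in range(len(N) + 1)
                   for c in combinations(N, r)]
        return all(v[s | t] >= v[s] + v[t]
                   for s in subsets for t in subsets if not (s & t))

    print(is_superadditive(v, N))   # True for these hypothetical numbers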

Partition function form


The characteristic function form ignores the possible externalities of coalition formation. In the partition function
form the payoff of a coalition depends not only on its members, but also on the way the rest of the players are
partitioned (Thrall & Lucas 1963).

General and applied uses


As a method of applied mathematics, game theory has been used to study a wide variety of human and animal
behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including
behaviors of firms, markets, and consumers. The use of game theory in the social sciences has expanded, and game
theory has been applied to political, sociological, and psychological behaviors as well.
Game-theoretic analysis was initially used to study animal behavior by Ronald Fisher in the 1930s (although even
Charles Darwin makes a few informal game-theoretic statements). This work predates the name "game theory", but it
shares many important features with this field. The developments in economics were later applied to biology largely
by John Maynard Smith in his book Evolution and the Theory of Games.
In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop
theories of ethical or normative behavior and to prescribe such behavior.[8] In economics and philosophy, scholars
have applied game theory to help in the understanding of good or proper behavior. Game-theoretic arguments of this
type can be found as far back as Plato.[9]

Description and modeling


The primary use of game theory is to describe and model how human populations behave. Some scholars believe
that by finding the equilibria of games they can predict how actual human populations will behave when confronted
with situations analogous to the game being studied. This particular view of game theory has come under recent
criticism. First, it is criticized because the assumptions made by game theorists are often violated. Game theorists
may assume players always act in a way to directly maximize their wins (the Homo economicus model), but in
practice, human behavior often deviates from this model. Explanations of this phenomenon are many: irrationality,
new models of deliberation, or even different motives (like that of altruism). Game theorists respond by comparing
their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game
theory as a reasonable scientific ideal akin to the models used by physicists. However, additional criticism of this
use of game theory has been levied because some experiments have demonstrated that individuals do not play
equilibrium strategies. For instance, in the centipede game, guess 2/3 of the average game, and the dictator game,
people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these
experiments.[10]

A three stage Centipede Game

Alternatively, some authors claim that Nash equilibria do not provide predictions for human populations, but rather
provide an explanation for why populations that play Nash equilibria remain in that state. However, the question of
how populations reach those points remains open.
Some game theorists have turned to evolutionary game theory in order to resolve these issues. These models
presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game
theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes
both biological as well as cultural evolution and also models of individual learning (for example, fictitious play
dynamics).

Prescriptive or normative analysis

            Cooperate   Defect

Cooperate   -1, -1      -10, 0

Defect      0, -10      -5, -5

The Prisoner's Dilemma

On the other hand, some scholars see game theory not as a predictive tool for the behavior of human beings, but as a
suggestion for how people ought to behave. Since a Nash equilibrium of a game constitutes one's best response to the
actions of the other players, playing a strategy that is part of a Nash equilibrium seems appropriate. However, this
use of game theory has also come under criticism. First, in some cases it is appropriate to play a non-equilibrium
strategy if one expects others to play non-equilibrium strategies as well. For an example, see Guess 2/3 of the
average.
Second, the Prisoner's dilemma presents another potential counterexample. In the Prisoner's Dilemma, each player
pursuing his own self-interest leads both players to be worse off than had they not pursued their own self-interests.

Economics and business


Game theory is a major method used in mathematical economics and business for modeling competing behaviors of
interacting agents.[11] Applications include a wide array of economic phenomena and approaches, such as auctions,
bargaining, fair division, duopolies, oligopolies, social network formation, agent-based computational economics,[12]
general equilibrium, mechanism design,[13] and voting systems,[14] and across such broad areas as experimental
economics,[15] behavioral economics,[16] information economics,[17] industrial organization,[18] and political
economy.[19] [20]
This research usually focuses on particular sets of strategies known as equilibria in games. These "solution concepts"
are usually based on what is required by norms of rationality. In non-cooperative games, the most famous of these is
the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other
strategies. So, if all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to
deviate, since their strategy is the best they can do given what others are doing.
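
A minimal sketch of this idea: a strategy profile is checked for Nash equilibrium by testing every unilateral
deviation, using the Prisoner's Dilemma payoffs from the table above.

    # payoffs[(row action, column action)] = (row payoff, column payoff)
    payoffs = {
        ("Cooperate", "Cooperate"): (-1, -1), ("Cooperate", "Defect"): (-10, 0),
        ("Defect", "Cooperate"): (0, -10),    ("Defect", "Defect"): (-5, -5),
    }
    actions = ["Cooperate", "Defect"]

    def is_nash(row, col):
        """Nash equilibrium: neither player gains by deviating alone."""
        row_ok = all(payoffs[(row, col)][0] >= payoffs[(d, col)][0] for d in actions)
        col_ok = all(payoffs[(row, col)][1] >= payoffs[(row, d)][1] for d in actions)
        return row_ok and col_ok

    print([p for p in payoffs if is_nash(*p)])   # [('Defect', 'Defect')]
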
The payoffs of the game are generally taken to represent the utility of individual players. Often in modeling
situations the payoffs represent money, which presumably corresponds to an individual's utility. This assumption,
however, can be faulty.
A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of some
particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy
sets in the presented game are equilibria of the appropriate type. Naturally one might wonder to what use this
information should be put. Economists and business professors suggest two primary uses (noted above): descriptive and
prescriptive.[8]

Political science
The application of game theory to political science is focused in the overlapping areas of fair division, political
economy, public choice, war bargaining, positive political theory, and social choice theory. In each of these areas,
researchers have developed game-theoretic models in which the players are often voters, states, special interest
groups, and politicians.
For early examples of game theory applied to political science, see the work of Anthony Downs. In his book An
Economic Theory of Democracy (Downs 1957), he applies the Hotelling firm location model to the political process.
In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. The theorist
shows how the political candidates will converge to the ideology preferred by the median voter.
A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and
reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of
nondemocratic leaders, what effect concessions will have, and if promises will be kept. Thus there will be mistrust
and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy (Levy &
Razin 2003).

Biology

        Hawk     Dove

Hawk    20, 20   80, 40

Dove    40, 80   60, 60

The hawk-dove game

Unlike economics, the payoffs for games in biology are often interpreted as corresponding to fitness. In addition, the
focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be
maintained by evolutionary forces. The best known equilibrium in biology is known as the evolutionarily stable
strategy (or ESS), and was first introduced in (Smith & Price 1973). Although its initial motivation did not involve
any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium.
In biology, game theory has been used to understand many different phenomena. It was first used to explain the
evolution (and stability) of the approximate 1:1 sex ratios. (Fisher 1930) suggested that the 1:1 sex ratios are a result
of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren.
Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal
communication (Harper & Maynard Smith 2003). The analysis of signaling games and other communication games
has provided some insight into the evolution of communication among animals. For example, the mobbing behavior
of many species, in which a large number of prey animals attack a larger predator, seems to be an example of
spontaneous emergent organization. Ants have also been shown to exhibit feed-forward behavior akin to fashion
(see Butterfly Economics).
Biologists have used the game of chicken to analyze fighting behavior and territoriality.
Maynard Smith, in the preface to Evolution and the Theory of Games, writes, "paradoxically, it has turned out that
game theory is more readily applied to biology than to the field of economic behaviour for which it was originally
designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in
nature.[21]
One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a
way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism
because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness.
Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's
hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their
entire lives and never mate, to Vervet monkeys that warn group members of a predator's approach, even when it
endangers that individual's chance of survival.[22] All of these actions increase the overall fitness of a group, but
occur at a cost to the individual.
Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the
individuals they help and favor relatives. Hamilton's rule explains the evolutionary reasoning behind this selection
with the inequality c < b·r, where the cost (c) to the altruist must be less than the benefit (b) to the recipient
multiplied by the coefficient of relatedness (r). The more closely related two organisms are, the higher the incidence
of altruism becomes, because they share many of the same alleles. This means that the altruistic individual, by
ensuring that the alleles of its close relative are passed on (through survival of its offspring), can forgo the option of
having offspring itself, because the same number of alleles are passed on. Helping a sibling, for example (in diploid
animals), has a coefficient of ½, because (on average) an individual shares ½ of the alleles in its sibling's offspring.
Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual
producing offspring.[22] The coefficient values depend heavily on the scope of the playing field: for example, if the
choice of whom to favor includes all genetic living things, not just all relatives, and the discrepancy between all
humans is assumed to account for only approximately 1% of the diversity in the playing field, then a coefficient that
was ½ in the smaller field becomes 0.995. Similarly, if it is considered that information other than that of a genetic
nature (e.g. epigenetics, religion, science, etc.) persists through time, the playing field becomes larger still, and the
discrepancies smaller.
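
A minimal worked check of Hamilton's rule c < b·r in Python; the cost and benefit figures are hypothetical.

    def altruism_favored(cost, benefit, relatedness):
        """Hamilton's rule: kin-directed altruism can evolve when c < b * r."""
        return cost < benefit * relatedness

    # Hypothetical numbers for helping a full sibling (r = 1/2 in diploid animals):
    print(altruism_favored(cost=1.0, benefit=3.0, relatedness=0.5))   # True:  1.0 < 1.5
    print(altruism_favored(cost=2.0, benefit=3.0, relatedness=0.5))   # False: 2.0 > 1.5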

Computer science and logic


Game theory has come to play an increasingly important role in logic and in computer science. Several logical
theories have a basis in game semantics. In addition, computer scientists have used games to model interactive
computations. Also, game theory provides a theoretical basis to the field of multi-agent systems.
Separately, game theory has played a role in online algorithms. In particular, the k-server problem has in the
past been studied in terms of games with moving costs and request-answer games (Ben David, Borodin & Karp et
al. 1994). Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity
of randomized algorithms, and especially of online algorithms.
The emergence of the internet has motivated the development of algorithms for finding equilibria in games, markets,
computational auctions, peer-to-peer systems, and security and information markets. Algorithmic game theory[23]
and within it algorithmic mechanism design[24] combine computational algorithm design and analysis of complex
systems with economic theory.[25]

Philosophy

        Stag   Hare

Stag    3, 3   0, 2

Hare    2, 0   2, 2

Stag hunt

Game theory has been put to several uses in philosophy. Responding to two papers by W.V.O. Quine (1960, 1967),
Lewis (1969) used game theory to develop a philosophical account of convention. In so doing, he provided the first
analysis of common knowledge and employed it in analyzing play in coordination games. In addition, he first
suggested that one can understand meaning in terms of signaling games. This later suggestion has been pursued by
several philosophers since Lewis (Skyrms (1996), Grim, Kokalis, and Alai-Tafti et al. (2004)). Following Lewis's
(1969) game-theoretic account of conventions, Ullmann Margalit (1977) and Bicchieri (2006) have developed
theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into
a coordination game.[26]
Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a
collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social
outcomes resulting from agents' interactions. Philosophers who have worked in this area include Bicchieri (1989,
1993),[27] Skyrms (1990),[28] and Stalnaker (1999).[29]
In ethics, some authors have attempted to pursue the project, begun by Thomas Hobbes, of deriving morality from
self-interest. Since games like the Prisoner's dilemma present an apparent conflict between morality and self-interest,
explaining why cooperation is required by self-interest is an important component of this project. This general
strategy is a component of the general social contract view in political philosophy (for examples, see Gauthier (1986)
and Kavka (1986)).[30]
Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes
about morality and corresponding animal behaviors. These authors look at several games including the Prisoner's
dilemma, Stag hunt, and the Nash bargaining game as providing an explanation for the emergence of attitudes about
morality (see, e.g., Skyrms (1996, 2004) and Sober and Wilson (1999)).
Some assumptions used in some parts of game theory have been challenged in philosophy; for example,
psychological egoism states that rationality reduces to self-interest, a claim debated among philosophers (see
Psychological egoism § Criticisms).

Types of games

Cooperative or non-cooperative
A game is cooperative if the players are able to form binding commitments; for instance, the legal system requires
them to adhere to their promises. In noncooperative games this is not possible.
Often it is assumed that communication among players is allowed in cooperative games, but not in noncooperative
ones. This classification on two binary criteria has, however, been rejected (Harsanyi 1974).
Of the two types of games, noncooperative games are able to model situations to the finest details, producing
accurate results. Cooperative games focus on the game at large. Considerable efforts have been made to link the two
approaches. The so-called Nash programme has already established many of the cooperative solutions as
noncooperative equilibria.
Hybrid games contain cooperative and non-cooperative elements. For instance, coalitions of players are formed in a
cooperative game, but these play in a non-cooperative fashion.

Symmetric and asymmetric

      E      F

E     1, 2   0, 0

F     0, 0   1, 2

An asymmetric game

A symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies
employed, not on who is playing them. If the identities of the players can be changed without changing the payoff to
the strategies, then a game is symmetric. Many of the commonly studied 2×2 games are symmetric. The standard
representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games. Some scholars would
consider certain asymmetric games as examples of these games as well. However, the most common payoffs for
each of these games are symmetric.
The most commonly studied asymmetric games are games where the players' strategy sets are not identical.
For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is
possible, however, for a game to have identical strategies for both players, yet be asymmetric. For example, the game
pictured to the right is asymmetric despite having identical strategy sets for both players.
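
A minimal sketch of that test: writing the row player's payoffs as a matrix A and the column player's as B, a
two-player game is symmetric exactly when B is the transpose of A, so that swapping roles leaves the payoff to
each strategy unchanged. The matrices below encode the asymmetric example above.

    # Row player's payoffs A and column player's payoffs B for the game above.
    A = [[1, 0],
         [0, 1]]
    B = [[2, 0],
         [0, 2]]

    def is_symmetric(A, B):
        """Symmetric iff B equals the transpose of A: the payoff depends only on
        the pair of strategies played, not on who plays them."""
        n = len(A)
        return all(B[i][j] == A[j][i] for i in range(n) for j in range(n))

    print(is_symmetric(A, B))   # False: identical strategy sets, yet asymmetric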

Zero-sum and non-zero-sum

     A       B

A    –1, 1   3, –3

B    0, 0    –2, 2

A zero-sum game

Zero-sum games are a special case of constant-sum games, in which choices by players can neither increase nor
decrease the available resources. In zero-sum games the total benefit to all players in the game, for every
combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of
others). Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly
the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games
including Go and chess.
Many games studied by game theorists (including the famous prisoner's dilemma) are non-zero-sum games, because
some outcomes have net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player
does not necessarily correspond with a loss by another.
Constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation
in which there are potential gains from trade. It is possible to transform any game into a (possibly asymmetric)
zero-sum game by adding an additional dummy player (often called "the board"), whose losses compensate the
players' net winnings.
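
A minimal sketch computing pure-strategy security levels for the zero-sum game above; only the row player's
payoffs are stored, since the column player's payoffs are their negation.

    # Row player's payoffs in the zero-sum game above (column player gets the negation).
    M = [[-1, 3],
         [0, -2]]

    # Maximin: the row player picks the row whose worst case is best.
    maximin = max(min(row) for row in M)
    # Minimax: the column player picks the column that caps the row player lowest.
    minimax = min(max(M[i][j] for i in range(2)) for j in range(2))

    print(maximin, minimax)   # -1 0: since they differ, the value of this game
                              # is only achieved in mixed strategies (see Minimax)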

Simultaneous and sequential


Simultaneous games are games where both players move simultaneously, or if they do not move simultaneously, the
later players are unaware of the earlier players' actions (making them effectively simultaneous). Sequential games (or
dynamic games) are games where later players have some knowledge about earlier actions. This need not be perfect
information about every action of earlier players; it might be very little knowledge. For instance, a player may know
that an earlier player did not perform one particular action, while he does not know which of the other available
actions the first player actually performed.
The difference between simultaneous and sequential games is captured in the different representations discussed
above. Often, normal form is used to represent simultaneous games, and extensive form is used to represent
sequential ones. The transformation of extensive to normal form is one way, meaning that multiple extensive form
games correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are
insufficient for reasoning about sequential games; see subgame perfection.

Perfect information and imperfect information


An important subset of sequential games consists of games of perfect information. A game is one of perfect
information if all players know the moves previously made by all other players. Thus, only sequential games can be
games of perfect information, since in simultaneous games not every player knows the actions of the others. Most
games studied in game theory are imperfect-information games, although there are some interesting examples of
perfect-information games, including the ultimatum game and centipede game. Recreational games of perfect
information include chess, go, and mancala. Many card games are games of imperfect information, for instance
poker or contract bridge.

A game of imperfect information (the dotted line represents ignorance on the part of player 2, formally called an
information set)

Perfect information is often confused with complete information, which is a similar concept. Complete information
requires that every player know the strategies and payoffs available to the other players but not necessarily the
actions taken. Games of incomplete information can be reduced, however, to games of imperfect information by
introducing "moves by nature" (Leyton-Brown & Shoham 2008, p. 60).

Combinatorial games
Games in which the difficulty of finding an optimal strategy stems from the multiplicity of possible moves are called
combinatorial games. Examples include chess and go. Games that involve imperfect or incomplete information may
also have a strong combinatorial character, for instance backgammon. There is no unified theory addressing
combinatorial elements in games. There are, however, mathematical tools that can solve particular problems and
answer some general questions.[31]
Games of perfect information have been studied in combinatorial game theory, which has developed novel
representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof
methods to solve games of certain types, including some "loopy" games that may result in infinitely long sequences
of moves. These methods address games with higher combinatorial complexity than those usually considered in
traditional (or "economic") game theory.[32] [33] A typical game that has been solved this way is hex. A related field
of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating
the computational difficulty of finding optimal strategies.[34]
Research in artificial intelligence has addressed both perfect and imperfect (or incomplete) information games that
have very complex combinatorial structures (like chess, go, or backgammon) for which no provable optimal
strategies have been found. The practical solutions involve computational heuristics, like alpha-beta pruning or use
of artificial neural networks trained by reinforcement learning, which make games more tractable in computing
practice.[31] [35]

Infinitely long games


Games, as studied by economists and real-world game players, are generally finished in finitely many moves. Pure
mathematicians are not so constrained, and set theorists in particular study games that last for infinitely many moves,
with the winner (or other payoff) not known until after all those moves are completed.
The focus of attention is usually not so much on what is the best way to play such a game, but simply on whether one
or the other player has a winning strategy. (It can be proven, using the axiom of choice, that there are games—even
with perfect information, and where the only outcomes are "win" or "lose"—for which neither player has a winning
strategy.) The existence of such strategies, for cleverly designed games, has important consequences in descriptive
set theory.

Discrete and continuous games


Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events,
outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from
a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any
non-negative quantities, including fractional quantities.
Differential games such as the continuous pursuit and evasion game are continuous games.
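
A minimal sketch of a continuous-strategy game: best-response iteration in a Cournot duopoly under an assumed
linear inverse demand P = a - b(q1 + q2) and constant marginal cost c. All parameter values are hypothetical.

    # Hypothetical linear Cournot duopoly. Maximizing (P - c) * q_i gives firm i's
    # best response to the rival quantity q_j: (a - c - b * q_j) / (2 * b).
    a, b, c = 100.0, 1.0, 10.0

    def best_response(q_other):
        return max(0.0, (a - c - b * q_other) / (2 * b))

    q1 = q2 = 0.0
    for _ in range(50):                # iterate best responses toward the fixed point
        q1, q2 = best_response(q2), best_response(q1)

    print(round(q1, 3), round(q2, 3))  # both converge to (a - c) / (3 * b) = 30.0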

Many-player and population games


Games with an arbitrary, but finite, number of players are often called n-person games (Luce & Raiffa 1957).
Evolutionary game theory considers games involving a population of decision makers, where the frequency with
which a particular decision is made can change over time in response to the decisions made by all individuals in the
population. In biology, this is intended to model (biological) evolution, where genetically programmed organisms
pass along some of their strategy programming to their offspring. In economics, the same theory is intended to
capture population changes because people play the game many times within their lifetime, and consciously (and
perhaps rationally) switch strategies (Webb 2007).

Stochastic outcomes (and relation to other fields)


Individual decision problems with stochastic outcomes are sometimes considered "one-player games". These
situations are not considered game theoretical by some authors. They may be modeled using similar tools within the
related disciplines of decision theory, operations research, and areas of artificial intelligence, particularly AI
planning (with uncertainty) and multi-agent systems. Although these fields may have different motivators, the
mathematics involved is substantially the same, e.g. using Markov decision processes (MDPs).
Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes
"chance moves", also known as "moves by nature" (Osborne & Rubinstein 1994). This player is not typically
considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice
where required by the game.
For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For
example, the difference in approach between MDPs and the minimax solution is that the latter considers the
worst-case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed
probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not
available, but may also overestimate extremely unlikely (but costly) events, dramatically swaying the strategy
in such scenarios if it is assumed that an adversary can force such an event to happen.[36] (See black swan theory for
more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in
investment banking.)
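
A minimal sketch of the contrast just described: the same set of moves by the other agent evaluated in expectation
(the MDP view, under an assumed probability distribution) versus in the worst case (the minimax view). The
payoffs and probabilities are hypothetical.

    # Hypothetical payoffs to us under each of the other agent's three moves,
    # with an assumed probability distribution over those moves.
    outcomes = [10.0, 8.0, -100.0]
    probs = [0.6, 0.399, 0.001]

    expected_value = sum(p * x for p, x in zip(probs, outcomes))   # MDP-style
    worst_case = min(outcomes)             # minimax: the adversary picks the worst

    print(round(expected_value, 3))   # 9.092: acceptable in expectation
    print(worst_case)                 # -100.0: dominates if an adversary can force it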

General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of
moves by other players) have also been studied. The "gold standard" is considered to be partially observable
stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation.[36]

Metagames
These are games the play of which is the development of the rules for another game, the target or subject game.
Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to
mechanism design theory.
The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard (Howard 1971)
whereby a situation is framed as a strategic game in which stakeholders try to realise their objectives by means of the
options available to them. Subsequent developments have led to the formulation of drama theory.

Notes
[1] Roger B. Myerson (1991). Game Theory: Analysis of Conflict, Harvard University Press, p. 1 (http://books.google.com/books?id=E8WQFRCsNr0C&printsec=find&pg=PA1#v=onepage&q&f=false). Chapter-preview links, pp. vii-xi (http://books.google.com/books?id=E8WQFRCsNr0C&printsec=find&pg=PR7#v=onepage&q&f=false).
[2] R. J. Aumann ([1987] 2008). "game theory," Introduction, The New Palgrave Dictionary of Economics, 2nd Edition. Abstract (http://www.dictionaryofeconomics.com/article?id=pde2008_G000007&q=game theory&topicid=&result_number=3).
[3] James Madison, Vices of the Political System of the United States, April 1787. Link (http://www.constitution.org/jm/17870400_vices.htm).
[4] Jack Rakove, "James Madison and the Constitution," History Now, Issue 13, September 2007. Link (http://www.historynow.org/09_2007/historian2.html).
[5] J. v. Neumann (1928). "Zur Theorie der Gesellschaftsspiele," Mathematische Annalen, 100(1), pp. 295-320 (http://www.springerlink.com/content/q07530916862223p/). English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 4, pp. 13-42 (http://books.google.com/books?hl=en&lr=&id=9lSVFzsTGWsC&oi=fnd&pg=PA13#v=onepage&q&f=false).
[6] Leonard, Robert. Von Neumann, Morgenstern, and the Creation of Game Theory. Cambridge University Press, 2010.
[7] Although common knowledge was first discussed by the philosopher David Lewis in his dissertation (and later book) Convention in the late 1960s, it was not widely considered by economists until Robert Aumann's work in the 1970s.
[8] Colin F. Camerer (2003). Behavioral Game Theory: Experiments in Strategic Interaction, pp. 5-7 (scroll to 1.1 What Is Game Theory Good For? (http://press.princeton.edu/chapters/i7517.html)).
[9] Ross, Don. "Game Theory" (http://plato.stanford.edu/archives/spr2008/entries/game-theory/). The Stanford Encyclopedia of Philosophy (Spring 2008 Edition). Edward N. Zalta (ed.). Retrieved 2008-08-21.
[10] Experimental work in game theory goes by many names; experimental economics, behavioral economics, and behavioural game theory are several. For a recent discussion, see Colin F. Camerer (2003). Behavioral Game Theory: Experiments in Strategic Interaction (description (http://press.princeton.edu/titles/7517.html) and Introduction (http://press.princeton.edu/chapters/i7517.html), pp. 1-25).
[11] • At JEL:C7 of the Journal of Economic Literature classification codes.
   • R. J. Aumann (2008). "game theory," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract (http://www.dictionaryofeconomics.com/article?id=pde2008_G000007&edition=current&q=game theory&topicid=&result_number=4).
   • Martin Shubik (1981). "Game Theory Models and Methods in Political Economy," in Kenneth Arrow and Michael Intriligator, ed., Handbook of Mathematical Economics, v. 1, pp. 285-330.
   • Carl Shapiro (1989). "The Theory of Business Strategy," RAND Journal of Economics, 20(1), pp. 125-137 (http://www.jstor.org/pss/2555656).
[12] • Leigh Tesfatsion (2006). "Agent-Based Computational Economics: A Constructive Approach to Economic Theory," ch. 16, Handbook of Computational Economics, v. 2, pp. 831-880 (http://www.sciencedirect.com/science/article/pii/S1574002105020162).
   • Joseph Y. Halpern (2008). "computer science and game theory," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract (http://www.dictionaryofeconomics.com/article?id=pde2008_C000566&edition=current&q=&topicid=&result_number=1).
[13] • From The New Palgrave Dictionary of Economics (2008), 2nd Edition:
      Roger B. Myerson. "mechanism design." Abstract (http://www.dictionaryofeconomics.com/article?id=pde2008_M000132&edition=current&q=mechanism design&topicid=&result_number=3).
      _____. "revelation principle." Abstract (http://www.dictionaryofeconomics.com/article?id=pde2008_R000137&edition=current&q=moral&topicid=&result_number=1).
   • Tuomas Sandholm. "computing in mechanism design." Abstract (http://www.dictionaryofeconomics.com/article?id=pde2008_C000563&edition=&field=keyword&q=algorithmic mechanism design&topicid=&result_number=1).
   • Noam Nisan and Amir Ronen (2001). "Algorithmic Mechanism Design," Games and Economic Behavior, 35(1-2), pp. 166-196 (http://www.cs.cmu.edu/~sandholm/cs15-892F09/Algorithmic mechanism design.pdf).
   • Noam Nisan et al., ed. (2007). Algorithmic Game Theory, Cambridge University Press. Description (http://www.cup.cam.ac.uk/asia/catalogue/catalogue.asp?isbn=9780521872829).
[14] R. Aumann and S. Hart, ed., 1994. Handbook of Game Theory with Economic Applications, v. 2, outline links, ch. 30: "Voting Procedures" (http://www.sciencedirect.com/science/article/pii/S1574000505800621) & ch. 31: "Social Choice" (http://www.sciencedirect.com/science/article/pii/S1574000505800633).
[15] • Vernon L. Smith, 1992. "Game Theory and Experimental Economics: Beginnings and Early Influences," in E. R. Weintraub, ed., Towards a History of Game Theory, pp. 241-282.
   • _____, 2001. "Experimental Economics," International Encyclopedia of the Social & Behavioral Sciences, pp. 5100-5108. Abstract (http://www.sciencedirect.com/science/article/pii/B0080430767022324) per sect. 1.1 & 2.1.
   • Charles R. Plott and Vernon L. Smith, ed., 2008. Handbook of Experimental Economics Results, v. 1, Elsevier, Part 4, Games, ch. 45-66.
   • Vincent P. Crawford (1997). "Theory and Experiment in the Analysis of Strategic Interaction," in Advances in Economics and Econometrics: Theory and Applications, pp. 206-242 (http://weber.ucsd.edu/~vcrawfor/CrawfordThExp97.pdf), Cambridge. Reprinted in Colin F. Camerer et al., ed. (2003). Advances in Behavioral Economics, Princeton, ch. 12. Description (http://press.princeton.edu/titles/8437.html).
   • Martin Shubik, 2002. "Game Theory and Experimental Gaming," in R. Aumann and S. Hart, ed., Handbook of Game Theory with Economic Applications, Elsevier, v. 3, pp. 2327-2351. Abstract (http://www.sciencedirect.com/science/article/pii/S1574000502030254).
[16] From The New Palgrave Dictionary of Economics (2008), 2nd Edition:
   • Faruk Gul. "behavioural economics and game theory." Abstract (http://www.dictionaryofeconomics.com/article?id=pde2008_G000210&q=Behavioral economics&topicid=&result_number=2).
   • Colin F. Camerer. "behavioral game theory." Abstract (http://www.dictionaryofeconomics.com/article?id=pde2008_B000302&q=Behavioral economics&topicid=&result_number=13).
   • _____ (1997). "Progress in Behavioral Game Theory," Journal of Economic Perspectives, 11(4), pp. 167-188 (http://authors.library.caltech.edu/22122/1/2138470[1].pdf).
   • _____ (2003). Behavioral Game Theory, Princeton. Description (http://press.princeton.edu/chapters/i7517.html) and ch. 1 link (http://press.princeton.edu/chapters/i7517.pdf).
   • _____, George Loewenstein, and Matthew Rabin, ed. (2003). Advances in Behavioral Economics, Princeton. 1986-2003 papers. Description (http://press.princeton.edu/titles/8437.html).
   • Drew Fudenberg (2006). "Advancing Beyond Advances in Behavioral Economics," Journal of Economic Literature, 44(3), pp. 694-711 (http://www.jstor.org/pss/30032349).
[17] • Eric Rasmusen (2007). Games and Information, 4th ed. Description (http://www.wiley.com/WileyCDA/WileyTitle/productCd-EHEP001009.html) and chapter-preview links (http://books.google.com/books?id=5XEMuJwnBmUC&printsec=fnd&pg=PR5&dq=gbs_atb#v=onepage&q&f=false).
   • David M. Kreps (1990). Game Theory and Economic Modelling. Description (http://econpapers.repec.org/bookchap/oxpobooks/9780198283812.htm).
   • R. Aumann and S. Hart, ed. (1992, 2002). Handbook of Game Theory with Economic Applications, v. 1, ch. 3-6 (http://www.sciencedirect.com/science/handbooks/15740005/1) and v. 3, ch. 43 (http://www.sciencedirect.com/science/article/pii/S1574000502030060).
[18] • Jean Tirole (1988). The Theory of Industrial Organization, MIT Press. Description (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=8224) and chapter-preview links, pp. vii-ix, "General Organization," pp. 5-6, and "Non-Cooperative Game Theory: A User's Guide Manual," ch. 11, pp. 423-59.
   • Kyle Bagwell and Asher Wolinsky (2002). "Game Theory and Industrial Organization," ch. 49, Handbook of Game Theory with Economic Applications, v. 3, pp. 1851-1895 (http://www.sciencedirect.com/science/article/pii/S1574000502030126).
   • Martin Shubik (1959). Strategy and Market Structure: Competition, Oligopoly, and the Theory of Games, Wiley. Description (http://devirevues.demo.inist.fr/handle/2042/29380) and review extract (http://www.jstor.org/pss/40434883).
   • _____ with Richard Levitan (1980). Market Structure and Behavior, Harvard University Press. Review extract (http://www.jstor.org/pss/2232276).
[19] • Martin Shubik (1981). "Game Theory Models and Methods in Political Economy," in Handbook of Mathematical Economics, v. 1, pp. 285-330.
   • _____ (1987). A Game-Theoretic Approach to Political Economy. MIT Press. Description (http://mitpress.mit.edu/catalog/item/default.asp?tid=5086&ttype=2).
[20] • Martin Shubik (1978). "Game Theory: Economic Applications," in W. Kruskal and J.M. Tanur, ed., International Encyclopedia of Statistics, v. 2, pp. 372-78.
   • Robert Aumann and Sergiu Hart, ed. Handbook of Game Theory with Economic Applications, Elsevier (scrollable to chapter-outline or abstract links): 1992, v. 1; 1994, v. 2; 2002, v. 3.
[21] Evolutionary Game Theory (Stanford Encyclopedia of Philosophy) (http://plato.stanford.edu/entries/game-evolutionary/).
[22] Biological Altruism (Stanford Encyclopedia of Philosophy) (http://www.seop.leeds.ac.uk/entries/altruism-biological/).
[23] Noam Nisan et al., ed. (2007). Algorithmic Game Theory, Cambridge University Press. Description (http://www.cup.cam.ac.uk/asia/catalogue/catalogue.asp?isbn=9780521872829).
[24] Noam Nisan and Amir Ronen (2001). "Algorithmic Mechanism Design," Games and Economic Behavior, 35(1-2), pp. 166-196 (http://www.cs.cmu.edu/~sandholm/cs15-892F09/Algorithmic mechanism design.pdf).
[25] • Joseph Y. Halpern (2008). "computer science and game theory," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract (http://www.dictionaryofeconomics.com/article?id=pde2008_C000566&edition=current&q=&topicid=&result_number=1).
   • Yoav Shoham (2008). "Computer Science and Game Theory," Communications of the ACM, 51(8), pp. 75-79 (http://www.robotics.stanford.edu/~shoham/www papers/CSGT-CACM-Shoham.pdf).
   • Amy Greenwald and Michael L. Littman (2007). "Introduction to the Special Issue on Learning and Computational Game Theory," Machine Learning, 67(1-2), pp. 3-6. Preview and issue-article links (http://www.springerlink.com/content/b6232u7525640881/).
[26] E. Ullmann Margalit, The Emergence of Norms, Oxford University Press, 1977. C. Bicchieri, The Grammar of Society: the Nature and Dynamics of Social Norms, Cambridge University Press, 2006.
[27] "Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge," Erkenntnis 30, 1989: 69-85. See also Rationality and Coordination, Cambridge University Press, 1993.
[28] The Dynamics of Rational Deliberation, Harvard University Press, 1990.
[29] "Knowledge, Belief, and Counterfactual Reasoning in Games," in Cristina Bicchieri, Richard Jeffrey, and Brian Skyrms, eds., The Logic of Strategy. New York: Oxford University Press, 1999.
[30] For a more detailed discussion of the use of game theory in ethics, see the Stanford Encyclopedia of Philosophy's entry on game theory and ethics (http://plato.stanford.edu/entries/game-ethics/).
[31] Jörg Bewersdorff (2005). Luck, Logic, and White Lies: The Mathematics of Games. A K Peters, Ltd., pp. ix-xii and chapter 31. ISBN 9781568812106.
[32] Albert, Michael H.; Nowakowski, Richard J.; Wolfe, David (2007). Lessons in Play: An Introduction to Combinatorial Game Theory. A K Peters, Ltd., pp. 3-4. ISBN 978-1-56881-277-9.
[33] Beck, József (2008). Combinatorial Games: Tic-Tac-Toe Theory. Cambridge University Press, pp. 1-3. ISBN 9780521461009.
[34] Robert A. Hearn; Erik D. Demaine (2009). Games, Puzzles, and Computation. A K Peters, Ltd. ISBN 9781568813226.
[35] M. Tim Jones (2008). Artificial Intelligence: A Systems Approach. Jones & Bartlett Learning, pp. 106-118. ISBN 9780763773373.
[36] Hugh Brendan McMahan (2006). Robust Planning in Domains with Stochastic Outcomes, Adversaries, and Partial Observability (http://www.cs.cmu.edu/~mcmahan/research/mcmahan_thesis.pdf), CMU-CS-06-166, pp. 3-4.
References and further reading

Textbooks and general references


• Aumann, Robert J. (1987), "game theory," The New Palgrave: A Dictionary of Economics, v. 2, pp. 460–82.
• Aumann, Robert, and Sergiu Hart, ed. Handbook of Game Theory with Economic Applications, scrollable to chapter-outline or abstract links:
1992. v. 1 (http://www.sciencedirect.com/science?_ob=PublicationURL&_tockey=#TOC#24608#1992#999989999#576731#FLP#&_cdi=24608&_pubType=HS&view=c&_auth=y&_prev=y&_acct=C000228598&_version=1&_urlVersion=0&_userid=10&md5=ec23eb49a19772fcb6419d3088e7e45a)
1994. v. 2 (http://www.sciencedirect.com/science?_ob=PublicationURL&_tockey=#TOC#24608#1994#999979999#576730#FLP#&_cdi=24608&_pubType=HS&view=c&_auth=y&_acct=C000228598&_version=1&_urlVersion=0&_userid=10&md5=ea98d6d6854eb852502db96104255fac)
2002. v. 3 (http://www.sciencedirect.com/science?_ob=PublicationURL&_tockey=#TOC#24608#2002#999969999#565225#FLP#&_cdi=24608&_pubType=HS&view=c&_auth=y&_next=y&_acct=C000228598&_version=1&_urlVersion=0&_userid=10&md5=6c4315f84e872b8ef135f8195eb8b4ab)
• The New Palgrave Dictionary of Economics (2008). 2nd Edition:
"game theory" by Robert J. Aumann. Abstract. (http://www.dictionaryofeconomics.com/article?id=pde2008_G000007&q=game theory&topicid=&result_number=3)
"game theory in economics, origins of," by Robert Leonard. Abstract. (http://www.dictionaryofeconomics.com/article?id=pde2008_G000193&goto=a&topicid=B2&result_number=10)
"behavioural economics and game theory" by Faruk Gul. Abstract. (http://www.dictionaryofeconomics.com/article?id=pde2008_G000210&q=Behavioral economics&topicid=&result_number=2)
• Camerer, Colin (2003), Behavioral Game Theory: Experiments in Strategic Interaction, Russell Sage Foundation,
ISBN 978-0-691-09039-9 Description (http://press.princeton.edu/titles/7517.html) and Introduction (http://
press.princeton.edu/chapters/i7517.html), pp. 1–25.
• Dutta, Prajit K. (1999), Strategies and games: theory and practice, MIT Press, ISBN 978-0-262-04169-0.
Suitable for undergraduate and business students.
• Fernandez, L F.; Bierman, H S. (1998), Game theory with economic applications, Addison-Wesley,
ISBN 978-0-201-84758-1. Suitable for upper-level undergraduates.
• Fudenberg, Drew; Tirole, Jean (1991), Game theory, MIT Press, ISBN 978-0-262-06141-4. Acclaimed reference
text. Description. (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=8204)
• Gibbons, Robert D. (1992), Game theory for applied economists, Princeton University Press,
ISBN 978-0-691-00395-5. Suitable for advanced undergraduates.
• Published in Europe as Robert Gibbons (2001), A Primer in Game Theory, London: Harvester Wheatsheaf,
ISBN 978-0-7450-1159-2.
• Gintis, Herbert (2000), Game theory evolving: a problem-centered introduction to modeling strategic behavior,
Princeton University Press, ISBN 978-0-691-00943-8
• Green, Jerry R.; Mas-Colell, Andreu; Whinston, Michael D. (1995), Microeconomic theory, Oxford University
Press, ISBN 978-0-19-507340-9. Presents game theory in formal way suitable for graduate level.
• Hansen, Pelle G.; Hendricks, Vincent F., eds. (2007), Game Theory: 5 Questions, New York, London: Automatic Press / VIP, ISBN 9788799101344. Snippets from interviews (http://www.gametheorists.com).
• Howard, Nigel (1971), Paradoxes of Rationality: Games, Metagames, and Political Behavior, Cambridge,
Massachusetts: The MIT Press, ISBN 978-0262582377
• Isaacs, Rufus (1999), Differential Games: A Mathematical Theory With Applications to Warfare and Pursuit,
Control and Optimization, New York: Dover Publications, ISBN 978-0-486-40682-4
• Leyton-Brown, Kevin; Shoham, Yoav (2008), Essentials of Game Theory: A Concise, Multidisciplinary
Introduction (http://www.gtessentials.org), San Rafael, CA: Morgan & Claypool Publishers,
ISBN 978-1-598-29593-1. An 88-page mathematical introduction; free online (http://www.morganclaypool.
com/doi/abs/10.2200/S00108ED1V01Y200802AIM003) at many universities.
• Miller, James H. (2003), Game theory at work: how to use game theory to outthink and outmaneuver your
competition, New York: McGraw-Hill, ISBN 978-0-07-140020-6. Suitable for a general audience.
• Myerson, Roger B. (1991), Game theory: analysis of conflict, Harvard University Press,
ISBN 978-0-674-34116-6
• Osborne, Martin J. (2004), An introduction to game theory, Oxford University Press, ISBN 978-0-19-512895-6.
Undergraduate textbook.
• Papayoanou, Paul (2010), Game Theory for Business, Probabilistic Publishing, ISBN 978-09647938-7-3. Primer
for business men and women.
• Osborne, Martin J.; Rubinstein, Ariel (1994), A course in game theory, MIT Press, ISBN 978-0-262-65040-3. A
modern introduction at the graduate level.
• Poundstone, William (1992), Prisoner's Dilemma: John von Neumann, Game Theory and the Puzzle of the Bomb,
Anchor, ISBN 978-0-385-41580-4. A general history of game theory and game theoreticians.
• Rasmusen, Eric (2006), Games and Information: An Introduction to Game Theory (http://www.rasmusen.org/
GI/index.html) (4th ed.), Wiley-Blackwell, ISBN 978-1-4051-3666-2
• Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical
Foundations (http://www.masfoundations.org), New York: Cambridge University Press,
ISBN 978-0-521-89943-7. A comprehensive reference from a computational perspective; downloadable free
online (http://www.masfoundations.org/download.html).
• Williams, John Davis (1954) (PDF), The Compleat Strategyst: Being a Primer on the Theory of Games of Strategy (http://www.rand.org/pubs/commercial_books/2007/RAND_CB113-1.pdf), Santa Monica: RAND Corp., ISBN 9780833042224. Praised primer and popular introduction for everybody, never out of print.
• Roger McCain's Game Theory: A Nontechnical Introduction to the Analysis of Strategy (http://faculty.lebow.
drexel.edu/McCainR//top/eco/game/game.html) (Revised Edition)
• Christopher Griffin (2010) Game Theory: Penn State Math 486 Lecture Notes (http://www.personal.psu.edu/
cxg286/Math486.pdf), pp. 169, CC-BY-NC-SA license, suitable introduction for undergraduates
• Webb, James N. (2007), Game theory: decisions, interaction and evolution, Springer undergraduate mathematics
series, Springer, ISBN 1846284236 Consistent treatment of game types usually claimed by different applied
fields, e.g. Markov decision processes.
• Joseph E. Harrington (2008) Games, strategies, and decision making, Worth, ISBN 0716766302. Textbook
suitable for undergraduates in applied fields; numerous examples, fewer formalisms in concept presentation.
Historically important texts


• Aumann, R.J. and Shapley, L.S. (1974), Values of Non-Atomic Games, Princeton University Press
• Cournot, A. Augustin (1838), "Recherches sur les principes mathématiques de la théorie des richesses", Libraire des sciences politiques et sociales (Paris: M. Rivière & C.ie)
• Edgeworth, Francis Y. (1881), Mathematical Psychics, London: Kegan Paul
• Farquharson, Robin (1969), Theory of Voting, Blackwell (Yale U.P. in the U.S.), ISBN 0631124608
• Luce, R. Duncan; Raiffa, Howard (1957), Games and decisions: introduction and critical survey, New York:
Wiley
• reprinted edition: R. Duncan Luce ; Howard Raiffa (1989), Games and decisions: introduction and critical
survey, New York: Dover Publications, ISBN 978-0-486-65943-5
• Maynard Smith, John (1982), Evolution and the theory of games, Cambridge University Press,
ISBN 978-0-521-28884-2
• Maynard Smith, John; Price, George R. (1973), "The logic of animal conflict", Nature 246 (5427): 15–18,
Bibcode 1973Natur.246...15S, doi:10.1038/246015a0
• Nash, John (1950), "Equilibrium points in n-person games" (http://www.pnas.org/cgi/search?sendit=Search&
pubdate_year=&volume=&firstpage=&DOI=&author1=nash&author2=&title=equilibrium&
andorexacttitle=and&titleabstract=&andorexacttitleabs=and&fulltext=&andorexactfulltext=and&
fmonth=Jan&fyear=1915&tmonth=Feb&tyear=2008&fdatedef=15+January+1915&tdatedef=6+February+
2008&tocsectionid=all&RESULTFORMAT=1&hits=10&hitsbrief=25&sortspec=relevance&
sortspecbrief=relevance), Proceedings of the National Academy of Sciences of the United States of America 36
(1): 48–49, doi:10.1073/pnas.36.1.48, PMC 1063129, PMID 16588946
• Shapley, L. S. (1953), A Value for n-person Games, In: Contributions to the Theory of Games volume II, H. W.
Kuhn and A. W. Tucker (eds.)
• Shapley, L. S. (1953), Stochastic Games, Proceedings of National Academy of Science Vol. 39, pp. 1095–1100.
• von Neumann, John (1928), "Zur Theorie der Gesellschaftsspiele", Mathematische Annalen 100 (1): 295–320 (http://www.springerlink.com/content/q07530916862223p/). English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 4, pp. 13–42 (http://books.google.com/books?hl=en&lr=&id=9lSVFzsTGWsC&oi=fnd&pg=PA13#v=onepage&q&f=false). Princeton University Press.
• von Neumann, John; Morgenstern, Oskar (1944), Theory of games and economic behavior, Princeton University
Press
• Zermelo, Ernst (1913), "Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels", Proceedings
of the Fifth International Congress of Mathematicians 2: 501–4

Other print references


• Ben David, S.; Borodin, Allan; Karp, Richard; Tardos, G.; Wigderson, A. (1994), "On the Power of
Randomization in On-line Algorithms" (http://www.math.ias.edu/~avi/PUBLICATIONS/MYPAPERS/
BORODIN/paper.pdf) (PDF), Algorithmica 11 (1): 2–14, doi:10.1007/BF01294260
• Bicchieri, Cristina (1993, 2nd. edition, 1997), Rationality and Coordination, Cambridge University Press,
ISBN 0-521-57444-7
• Downs, Anthony (1957), An Economic theory of Democracy, New York: Harper
• Gauthier, David (1986), Morals by agreement, Oxford University Press, ISBN 978-0-19-824992-4
• Allan Gibbard, "Manipulation of voting schemes: a general result", Econometrica, Vol. 41, No. 4 (1973),
pp. 587–601.
• Grim, Patrick; Kokalis, Trina; Alai-Tafti, Ali; Kilb, Nicholas; St Denis, Paul (2004), "Making meaning happen",
Journal of Experimental & Theoretical Artificial Intelligence 16 (4): 209–243,
doi:10.1080/09528130412331294715
• Harper, David; Maynard Smith, John (2003), Animal signals, Oxford University Press, ISBN 978-0-19-852685-8
• Harsanyi, John C. (1974), "An equilibrium point interpretation of stable sets", Management Science 20 (11):
1472–1495, doi:10.1287/mnsc.20.11.1472
• Levy, Gilat; Razin, Ronny (2003), "It Takes Two: An Explanation of the Democratic Peace" (http://papers.ssrn.
com/sol3/papers.cfm?abstract_id=433844), Working Paper
• Lewis, David (1969), Convention: A Philosophical Study, ISBN 978-0-631-23257-5 (2002 edition)
• McDonald, John (1950 - 1996), Strategy in Poker, Business & War, W. W. Norton, ISBN 0-393-31457-X. A
layman's introduction.
• Quine, W.v.O (1967), "Truth by Convention", in Philosophical Essays for A.N. Whitehead, Russell and Russell Publishers, ISBN 978-0-8462-0970-6
• Quine, W.v.O (1960), "Carnap and Logical Truth", Synthese 12 (4): 350–374, doi:10.1007/BF00485423
• Mark A. Satterthwaite, "Strategy-proofness and Arrow's Conditions: Existence and Correspondence Theorems for
Voting Procedures and Social Welfare Functions", Journal of Economic Theory 10 (April 1975), 187–217.
• Siegfried, Tom (2006), A Beautiful Math, Joseph Henry Press, ISBN 0-309-10192-1
• Skyrms, Brian (1990), The Dynamics of Rational Deliberation, Harvard University Press, ISBN 0-674-21885-X
• Skyrms, Brian (1996), Evolution of the social contract, Cambridge University Press, ISBN 978-0-521-55583-8
• Skyrms, Brian (2004), The stag hunt and the evolution of social structure, Cambridge University Press,
ISBN 978-0-521-53392-8
• Sober, Elliott; Wilson, David Sloan (1998), Unto others: the evolution and psychology of unselfish behavior,
Harvard University Press, ISBN 978-0-674-93047-6
• Thrall, Robert M.; Lucas, William F. (1963), "n-person games in partition function form", Naval Research Logistics Quarterly 10 (4): 281–298, doi:10.1002/nav.3800100126

Websites
• Paul Walker: History of Game Theory Page (http://www.econ.canterbury.ac.nz/personal_pages/paul_walker/
gt/hist.htm).
• David Levine: Game Theory. Papers, Lecture Notes and much more stuff. (http://dklevine.com)
• Alvin Roth: Game Theory and Experimental Economics page (http://www.economics.harvard.edu/~aroth/
alroth.html) - Comprehensive list of links to game theory information on the Web
• Adam Kalai: Game Theory and Computer Science (http://wiki.cc.gatech.edu/theory/index.php/
CS_8803_-_Game_Theory_and_Computer_Science._Spring_2008) - Lecture notes on Game Theory and
Computer Science
• Mike Shor: Game Theory .net (http://www.gametheory.net) - Lecture notes, interactive illustrations and other
information.
• Jim Ratliff's Graduate Course in Game Theory (http://virtualperfection.com/gametheory/) (lecture notes).
• Don Ross: Review Of Game Theory (http://plato.stanford.edu/entries/game-theory/) in the Stanford
Encyclopedia of Philosophy.
• Bruno Verbeek and Christopher Morris: Game Theory and Ethics (http://plato.stanford.edu/entries/
game-ethics/)
• Elmer G. Wiens: Game Theory (http://www.egwald.ca/operationsresearch/gameintroduction.php) -
Introduction, worked examples, play online two-person zero-sum games.
• Marek M. Kaminski: Game Theory and Politics (http://webfiles.uci.edu/mkaminsk/www/courses.html) -
syllabuses and lecture notes for game theory and political science.
• Web sites on game theory and social interactions (http://www.socialcapitalgateway.org/eng-gametheory.htm)
• Kesten Green's Conflict Forecasting (http://conflictforecasting.com) - See Papers for evidence on the accuracy
of forecasts from game theory and other methods.
• McKelvey, Richard D., McLennan, Andrew M., and Turocy, Theodore L. (2007) Gambit: Software Tools for
Game Theory (http://gambit.sourceforge.net).
• Benjamin Polak: Open Course on Game Theory at Yale (http://oyc.yale.edu/economics/game-theory) videos
of the course (http://www.youtube.com/view_play_list?p=6EF60E1027E1A10B)
• Benjamin Moritz, Bernhard Könsgen, Danny Bures, Ronni Wiersch, (2007) Spieltheorie-Software.de: An
application for Game Theory implemented in JAVA (http://www.spieltheorie-software.de).
Nash equilibrium
Nash Equilibrium
A solution concept in game theory

Relationships

Subset of: Rationalizability, Epsilon-equilibrium, Correlated equilibrium

Superset of: Evolutionarily stable strategy, Subgame perfect equilibrium, Perfect Bayesian equilibrium, Trembling hand perfect equilibrium, Stable Nash equilibrium, Strong Nash equilibrium

Significance

Proposed by: John Forbes Nash

Used for: All non-cooperative games

Example: Rock paper scissors

In game theory, Nash equilibrium (named after John Forbes Nash, who proposed it) is a solution concept of a game
involving two or more players, in which each player is assumed to know the equilibrium strategies of the other
players, and no player has anything to gain by changing only his own strategy unilaterally.[1]:14 If each player has
chosen a strategy and no player can benefit by changing his or her strategy while the other players keep theirs
unchanged, then the current set of strategy choices and the corresponding payoffs constitute a Nash equilibrium.
Stated simply, Amy and Phil are in Nash equilibrium if Amy is making the best decision she can, taking into account
Phil's decision, and Phil is making the best decision he can, taking into account Amy's decision. Likewise, a group of
players are in Nash equilibrium if each one is making the best decision that he or she can, taking into account the
decisions of the others. However, Nash equilibrium does not necessarily mean the best payoff for all the players
involved; in many cases, all the players might improve their payoffs if they could somehow agree on strategies
different from the Nash equilibrium: e.g., competing businesses forming a cartel in order to increase their profits.

Applications
Game theorists use the Nash equilibrium concept to analyze the outcome of the strategic interaction of several
decision makers. In other words, it provides a way of predicting what will happen if several people or several
institutions are making decisions at the same time, and if the outcome depends on the decisions of the others. The
simple insight underlying John Nash's idea is that we cannot predict the result of the choices of multiple decision
makers if we analyze those decisions in isolation. Instead, we must ask what each player would do, taking into
account the decision-making of the others.
Nash equilibrium has been used to analyze hostile situations like war and arms races[2] (see Prisoner's dilemma), and
also how conflict may be mitigated by repeated interaction (see Tit-for-tat). It has also been used to study to what
extent people with different preferences can cooperate (see Battle of the sexes), and whether they will take risks to
achieve a cooperative outcome (see Stag hunt). It has been used to study the adoption of technical standards, and also
the occurrence of bank runs and currency crises (see Coordination game). Other applications include traffic flow (see
Wardrop's principle), how to organize auctions (see auction theory), the outcome of efforts exerted by multiple
parties in the education process,[3] and even penalty kicks in soccer (see Matching pennies).[4]
History
A version of the Nash equilibrium concept was first used by Antoine Augustin Cournot in his theory of oligopoly
(1838). In Cournot's theory, firms choose how much output to produce to maximize their own profit. However, the
best output for one firm depends on the outputs of others. A Cournot equilibrium occurs when each firm's output
maximizes its profits given the output of the other firms, which is a pure strategy Nash Equilibrium.
The modern game-theoretic concept of Nash Equilibrium is instead defined in terms of mixed strategies, where
players choose a probability distribution over possible actions. The concept of the mixed strategy Nash Equilibrium
was introduced by John von Neumann and Oskar Morgenstern in their 1944 book The Theory of Games and
Economic Behavior. However, their analysis was restricted to the special case of zero-sum games. They showed that
a mixed-strategy Nash Equilibrium will exist for any zero-sum game with a finite set of actions. The contribution of
John Forbes Nash in his 1951 article Non-Cooperative Games was to define a mixed strategy Nash Equilibrium for
any game with a finite set of actions and prove that at least one (mixed strategy) Nash Equilibrium must exist in such
a game.
Since the development of the Nash equilibrium concept, game theorists have discovered that it makes misleading
predictions (or fails to make a unique prediction) in certain circumstances. Therefore they have proposed many
related solution concepts (also called 'refinements' of Nash equilibrium) designed to overcome perceived flaws in the
Nash concept. One particularly important issue is that some Nash equilibria may be based on threats that are not
'credible'. Therefore, in 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates
equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed
what happens if a game is repeated, or what happens if a game is played in the absence of perfect information.
However, subsequent refinements and extensions of the Nash equilibrium concept share the main insight on which
Nash's concept rests: all equilibrium concepts analyze what choices will be made when each player takes into
account the decision-making of others.

Definitions

Informal definition
Informally, a set of strategies is a Nash equilibrium if no player can do better by unilaterally changing his or her
strategy. To see what this means, imagine that each player is told the strategies of the others. Suppose then that each
player asks himself or herself: "Knowing the strategies of the other players, and treating the strategies of the other
players as set in stone, can I benefit by changing my strategy?"
If any player would answer "Yes", then that set of strategies is not a Nash equilibrium. But if every player prefers not
to switch (or is indifferent between switching and not) then the set of strategies is a Nash equilibrium. Thus, each
strategy in a Nash equilibrium is a best response to all other strategies in that equilibrium.[5]
The Nash equilibrium may sometimes appear non-rational in a third-person perspective. This is because it may
happen that a Nash equilibrium is not Pareto optimal.
The Nash equilibrium may also have non-rational consequences in sequential games because players may "threaten"
each other with non-rational moves. For such games the subgame perfect Nash equilibrium may be more meaningful
as a tool of analysis.
Formal definition
Let $(S, f)$ be a game with $n$ players, where $S_i$ is the strategy set for player $i$, $S = S_1 \times S_2 \times \cdots \times S_n$ is the set of strategy profiles, and $f = (f_1(x), \ldots, f_n(x))$ is the payoff function for $x \in S$. Let $x_i$ be a strategy of player $i$ and $x_{-i}$ a strategy profile of all players except player $i$. When each player $i \in \{1, \ldots, n\}$ chooses strategy $x_i$, resulting in the strategy profile $x = (x_1, \ldots, x_n)$, player $i$ obtains payoff $f_i(x)$. Note that the payoff depends on the whole strategy profile: on the strategy chosen by player $i$ as well as on the strategies chosen by all the other players. A strategy profile $x^* \in S$ is a Nash equilibrium (NE) if no unilateral deviation in strategy by any single player is profitable for that player, that is,

$$f_i(x_i^*, x_{-i}^*) \geq f_i(x_i, x_{-i}^*) \quad \text{for every } x_i \in S_i \text{ and every player } i.$$

A game can have a pure-strategy or a mixed-strategy Nash equilibrium (in the latter, a pure strategy is chosen stochastically with a fixed probability). Nash proved that if we allow mixed strategies, then every game with a finite number of players in which each player can choose from finitely many pure strategies has at least one Nash equilibrium.

When the inequality above holds strictly (with $>$ instead of $\geq$) for all players and all feasible alternative strategies, then the equilibrium is classified as a strict Nash equilibrium. If instead, for some player, the payoff of $x_i^*$ exactly equals that of some other strategy in the set $S_i$, then the equilibrium is classified as a weak Nash equilibrium.
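The definition can be checked mechanically for small finite games. Below is a minimal Python sketch (not from the article; the function names and the example table are illustrative) that tests the no-profitable-deviation inequality for a pure-strategy profile; it is applied to the coordination game discussed in the Examples section below.

    from itertools import product

    def is_nash_equilibrium(payoffs, strategy_sets, profile):
        """Check f_i(x*) >= f_i(x_i, x*_-i) for every player i and every
        alternative pure strategy x_i. payoffs[i] maps a full strategy
        profile (a tuple) to player i's payoff."""
        for i, strategies in enumerate(strategy_sets):
            current = payoffs[i][profile]
            for alt in strategies:
                deviated = profile[:i] + (alt,) + profile[i + 1:]
                if payoffs[i][deviated] > current:  # profitable unilateral deviation
                    return False
        return True

    # Illustrative 2-player game: the coordination game from the Examples below,
    # with payoffs (4,4), (1,3), (3,1), (3,3).
    table = {("A", "A"): (4, 4), ("A", "B"): (1, 3),
             ("B", "A"): (3, 1), ("B", "B"): (3, 3)}
    payoffs = [{p: u[i] for p, u in table.items()} for i in range(2)]
    strategy_sets = [("A", "B"), ("A", "B")]

    for prof in product(*strategy_sets):
        print(prof, is_nash_equilibrium(payoffs, strategy_sets, prof))
    # (A, A) and (B, B) are the two pure equilibria, matching the text.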

Examples

Coordination game

A sample coordination game showing relative payoffs for player 1 / player 2 with each combination

Player 2 adopts strategy A Player 2 adopts strategy B


Player 1 adopts strategy A 4, 4 1, 3
Player 1 adopts strategy B 3, 1 3, 3

The coordination game is a classic (symmetric) two player, two strategy game, with an example payoff matrix
shown to the right. The players should thus coordinate, both adopting strategy A, to receive the highest payoff; i.e.,
4. If both players chose strategy B though, there is still a Nash equilibrium. Although each player is awarded less
than optimal payoff, neither player has incentive to change strategy due to a reduction in the immediate payoff (from
3 to 1).
A famous example of this type of game was called the Stag Hunt; in the game two players may choose to hunt a stag
or a rabbit, the former providing more meat (4 utility units) than the latter (1 utility unit). The caveat is that the stag
must be cooperatively hunted, so if one player attempts to hunt the stag, while the other hunts the rabbit, he will fail
in hunting (0 utility units), whereas if they both hunt it they will split the payload (2, 2). The game thus exhibits two equilibria, (stag, stag) and (rabbit, rabbit), so each player's optimal strategy depends on his expectation of what the other player will do. If one hunter trusts that the other will hunt the stag, he should hunt the stag; however, if he suspects that the other will hunt the rabbit, he should hunt the rabbit. This game was used as an
analogy for social cooperation, since much of the benefit that people gain in society depends upon people
cooperating and implicitly trusting one another to act in a manner corresponding with cooperation.
Another example of a coordination game is the setting where two technologies are available to two firms with
compatible products, and they have to elect a strategy to become the market standard. If both firms agree on the
chosen technology, high sales are expected for both firms. If the firms do not agree on the standard technology, few
sales result. Both strategies are Nash equilibria of the game.
Driving on a road, and having to choose either to drive on the left or to drive on the right of the road, is also a
coordination game. For example, with payoffs 100 meaning no crash and 0 meaning a crash, the coordination game
can be defined with the following payoff matrix:

The driving game

Drive on the Left Drive on the Right


Drive on the Left 100, 100 0, 0
Drive on the Right 0, 0 100, 100

In this case there are two pure-strategy Nash equilibria: both choose to drive on the left, or both choose to drive on the right. If we admit mixed strategies (where a pure strategy is chosen at random, subject to some fixed probability), then there are three Nash equilibria for the same case: two we have seen from the pure-strategy form, where the probabilities are (0%, 100%) for player one, (0%, 100%) for player two; and (100%, 0%) for player one, (100%, 0%) for player two respectively. We add another where the probabilities for each player are (50%, 50%).

Prisoner's dilemma
(note differences in the orientation of the payoff matrix)

Example PD payoff matrix


Cooperate Defect

Cooperate 3, 3 0, 5

Defect 5, 0 1, 1

The Prisoner's Dilemma has the same structure of payoff matrix as depicted for the Coordination Game, but now C > A > D > B, where A is the payoff when both cooperate (here 3), B the "sucker" payoff for cooperating against a defector (0), C the temptation payoff for defecting against a cooperator (5), and D the payoff when both defect (1). Because C > A and D > B, each player improves his situation by switching from strategy #1 ("cooperate") to strategy #2 ("defect"), no matter what the other player decides. The Prisoner's Dilemma thus has a single Nash equilibrium: both players choosing strategy #2 ("defect"). What has long made this an interesting case to study is the fact that D < A (i.e., "both defect" is globally inferior to "both remain loyal"). The globally optimal strategy is unstable; it is not an equilibrium.

Network traffic
An application of Nash equilibria is in determining the expected flow of traffic in a network. Consider the graph on the right. If we assume that there are $x$ "cars" traveling from A to D, what is the expected distribution of traffic in the network?

[Figure: sample network graph. Values on edges are the travel time experienced by a 'car' travelling down that edge; $x$ is the number of cars travelling via that edge.]

This situation can be modeled as a "game" where every traveler has a choice of 3 strategies, where each strategy is a route from A to D (via B, via B and C, or via C). The "payoff" of each strategy is the travel time of the route, i.e., the sum of the edge values along it, where on the congestible edges the value depends on $x$, the number of cars on that edge. Thus payoffs for any given strategy depend on the choices of the other players, as is usual. However, the goal in this case is to minimize travel time, not maximize it. Equilibrium will occur when the time on all paths is exactly the same. When that happens, no single driver has any incentive to switch routes, since it can only add to his/her travel time. For the graph on the right, if, for example, 100 cars are travelling from A to D, then equilibrium will occur when 25 drivers travel via ABD, 50 via ABCD, and 25 via ACD. Every driver now has a total travel time of 3.75.

Notice that this distribution is not, actually, socially optimal. If the 100 cars agreed that 50 travel via ABD and the other 50 through ACD, then travel time for any single car would actually be 3.5, which is less than 3.75. This is also the Nash equilibrium if the path between B and C is removed, which means that adding an additional possible route can decrease the efficiency of the system, a phenomenon known as Braess's Paradox.
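The figure's exact edge values are not recoverable here, but the numbers in the text can be reproduced under one natural assumption: the congestible edges A-B and C-D cost 1 + x/100 (with x the number of cars on the edge), the edges B-D and A-C cost a fixed 2, and the shortcut B-C costs 0.25. The sketch below (illustrative, built on those assumed costs) checks that the 25/50/25 split equalizes all routes at 3.75, that the 50/50 split without B-C gives 3.5, and that a unilateral route switch only hurts.

    def route_times(n_abd, n_abcd, n_acd, shortcut=True):
        """Travel time of each route given the number of cars on it.
        Assumed edge costs (not from the original figure): A-B and C-D
        cost 1 + x/100, B-D and A-C cost 2, B-C costs 0.25."""
        ab = 1 + (n_abd + n_abcd) / 100   # cars on edge A-B
        cd = 1 + (n_abcd + n_acd) / 100   # cars on edge C-D
        times = {"ABD": ab + 2, "ACD": 2 + cd}
        if shortcut:
            times["ABCD"] = ab + 0.25 + cd
        return times

    print(route_times(25, 50, 25))         # all three routes: 3.75 (the equilibrium)
    print(route_times(50, 0, 50, False))   # without B-C: both routes 3.5

    # No driver can gain by switching: e.g. one ABD driver moving to ACD
    print(route_times(24, 50, 26)["ACD"])  # 3.76 > 3.75, so the switch is worse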

Competition game

A competition game

Player 2 chooses '0' Player 2 chooses '1' Player 2 chooses '2' Player 2 chooses '3'
Player 1 chooses '0' 0, 0 2, -2 2, -2 2, -2
Player 1 chooses '1' -2, 2 1, 1 3, -1 3, -1
Player 1 chooses '2' -2, 2 -1, 3 2, 2 4, 0
Player 1 chooses '3' -2, 2 -1, 3 0, 4 3, 3

This can be illustrated by a two-player game in which both players simultaneously choose an integer from 0 to 3 and
they both win the smaller of the two numbers in points. In addition, if one player chooses a larger number than the
other, then he/she has to give up two points to the other.
This game has a unique pure-strategy Nash equilibrium: both players choosing 0 (highlighted in light red). Any other
choice of strategies can be improved if one of the players lowers his number to one less than the other player's
number. In the table to the right, for example, when starting at the green square it is in player 1's interest to move to
the purple square by choosing a smaller number, and it is in player 2's interest to move to the blue square by
choosing a smaller number. If the game is modified so that the two players win the named amount if they both choose the same number, and otherwise win nothing, then there are 4 Nash equilibria: (0,0), (1,1), (2,2), and (3,3).
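The payoff table can be rebuilt from the stated rule and scanned for pure equilibria by brute force. Here is an illustrative Python sketch (not part of the original article) that confirms (0, 0) is the unique pure-strategy Nash equilibrium of the original game.

    def payoff(a, b):
        """Both win the smaller of the two numbers; whoever chose the larger
        number gives up two points to the other (a transfer)."""
        low = min(a, b)
        if a == b:
            return low, low
        return (low - 2, low + 2) if a > b else (low + 2, low - 2)

    choices = range(4)
    table = {(a, b): payoff(a, b) for a in choices for b in choices}

    equilibria = [
        (a, b) for (a, b) in table
        if all(table[(x, b)][0] <= table[(a, b)][0] for x in choices)   # player 1
        and all(table[(a, y)][1] <= table[(a, b)][1] for y in choices)  # player 2
    ]
    print(equilibria)  # [(0, 0)]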

Nash equilibria in a payoff matrix


There is an easy numerical way to identify Nash equilibria on a payoff matrix. It is especially helpful in two-person
games where players have more than two strategies. In this case formal analysis may become too long. This rule
does not apply to the case where mixed (stochastic) strategies are of interest. The rule goes as follows: if the first
payoff number, in the duplet of the cell, is the maximum of the column of the cell and if the second number is the
maximum of the row of the cell - then the cell represents a Nash equilibrium.

A payoff matrix – Nash equilibria in bold

             Option A   Option B   Option C
    Option A   0, 0      25, 40     5, 10
    Option B  40, 25      0, 0      5, 15
    Option C  10, 5      15, 5     10, 10

We can apply this rule to the 3×3 matrix above. Using the rule, we can very quickly (much faster than with formal analysis) see that the Nash equilibria cells are (B,A), (A,B), and (C,C). Indeed, for cell (B,A), 40 is the maximum of the first column and 25 is the maximum of the second row. For (A,B), 25 is the maximum of the second column and 40 is the maximum of the first row. The same holds for cell (C,C). For other cells, either one or both of the duplet members are not the maximum of the corresponding rows and columns.
This said, the actual mechanics of finding equilibrium cells is obvious: find the maximum of a column and check if
the second member of the pair is the maximum of the row. If these conditions are met, the cell represents a Nash
Equilibrium. Check all columns this way to find all NE cells. An N×N matrix may have between 0 and N×N pure
strategy Nash equilibria.
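The rule translates directly into code. The following sketch (illustrative, not from the article) applies it to the 3×3 matrix above and recovers the same three cells.

    rows = cols = ["A", "B", "C"]
    # payoff[(row, col)] = (row player's payoff, column player's payoff)
    payoff = {("A", "A"): (0, 0),   ("A", "B"): (25, 40), ("A", "C"): (5, 10),
              ("B", "A"): (40, 25), ("B", "B"): (0, 0),   ("B", "C"): (5, 15),
              ("C", "A"): (10, 5),  ("C", "B"): (15, 5),  ("C", "C"): (10, 10)}

    for r in rows:
        for c in cols:
            first, second = payoff[(r, c)]
            col_max = max(payoff[(x, c)][0] for x in rows)  # max of the cell's column
            row_max = max(payoff[(r, y)][1] for y in cols)  # max of the cell's row
            if first == col_max and second == row_max:
                print("Nash equilibrium at", (r, c))
    # Prints (A, B), (B, A) and (C, C), as in the text.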

Stability
The concept of stability, useful in the analysis of many kinds of equilibria, can also be applied to Nash equilibria.
A Nash equilibrium for a mixed strategy game is stable if a small change (specifically, an infinitesimal change) in
probabilities for one player leads to a situation where two conditions hold:
1. the player who did not change has no better strategy in the new circumstance
2. the player who did change is now playing with a strictly worse strategy.
If these cases are both met, then a player with the small change in his mixed-strategy will return immediately to the
Nash equilibrium. The equilibrium is said to be stable. If condition one does not hold then the equilibrium is
unstable. If only condition one holds then there are likely to be an infinite number of optimal strategies for the player
who changed. John Nash showed that the latter situation could not arise in a range of well-defined games.
In the "driving game" example above there are both stable and unstable equilibria. The equilibria involving
mixed-strategies with 100% probabilities are stable. If either player changes his probabilities slightly, they will be
both at a disadvantage, and his opponent will have no reason to change his strategy in turn. The (50%,50%)
equilibrium is unstable. If either player changes his probabilities, then the other player immediately has a better
strategy at either (0%, 100%) or (100%, 0%).
Stability is crucial in practical applications of Nash equilibria, since the mixed-strategy of each player is not
perfectly known, but has to be inferred from statistical distribution of his actions in the game. In this case unstable
equilibria are very unlikely to arise in practice, since any minute change in the proportions of each strategy seen will
lead to a change in strategy and the breakdown of the equilibrium.
The Nash equilibrium defines stability only in terms of unilateral deviations. In cooperative games such a concept is
not convincing enough. Strong Nash equilibrium allows for deviations by every conceivable coalition.[6] Formally, a
Strong Nash equilibrium is a Nash equilibrium in which no coalition, taking the actions of its complements as given,
can cooperatively deviate in a way that benefits all of its members.[7] However, the Strong Nash concept is
sometimes perceived as too "strong" in that the environment allows for unlimited private communication. In fact,
Strong Nash equilibrium has to be Pareto efficient. As a result of these requirements, Strong Nash almost never
exists.
A refined Nash equilibrium known as coalition-proof Nash equilibrium (CPNE)[6] occurs when players cannot do
better even if they are allowed to communicate and make "self-enforcing" agreement to deviate. Every correlated
strategy supported by iterated strict dominance and on the Pareto frontier is a CPNE.[8] Further, it is possible for a
game to have a Nash equilibrium that is resilient against coalitions less than a specified size, k. CPNE is related to
the theory of the core.
Occurrence
If a game has a unique Nash equilibrium and is played among players under certain conditions, then the NE strategy
set will be adopted. Sufficient conditions to guarantee that the Nash equilibrium is played are:
1. The players all will do their utmost to maximize their expected payoff as described by the game.
2. The players are flawless in execution.
3. The players have sufficient intelligence to deduce the solution.
4. The players know the planned equilibrium strategy of all of the other players.
5. The players believe that a deviation in their own strategy will not cause deviations by any other players.
6. There is common knowledge that all players meet these conditions, including this one. So, not only must each
player know the other players meet the conditions, but also they must know that they all know that they meet
them, and know that they know that they know that they meet them, and so on.

Where the conditions are not met


Examples of game theory problems in which these conditions are not met:
1. The first condition is not met if the game does not correctly describe the quantities a player wishes to maximize.
In this case there is no particular reason for that player to adopt an equilibrium strategy. For instance, the
prisoner’s dilemma is not a dilemma if either player is happy to be jailed indefinitely.
2. Intentional or accidental imperfection in execution. For example, a computer capable of flawless logical play
facing a second flawless computer will result in equilibrium. Introduction of imperfection will lead to its
disruption either through loss to the player who makes the mistake, or through negation of the common
knowledge criterion leading to possible victory for the player. (An example would be a player suddenly putting
the car into reverse in the game of chicken, ensuring a no-loss no-win scenario).
3. In many cases, the third condition is not met because, even though the equilibrium must exist, it is unknown due
to the complexity of the game, for instance in Chinese chess.[9] Or, if known, it may not be known to all players,
as when playing tic-tac-toe with a small child who desperately wants to win (meeting the other criteria).
4. The criterion of common knowledge may not be met even if all players do, in fact, meet all the other criteria.
Players wrongly distrusting each other's rationality may adopt counter-strategies to expected irrational play on
their opponents’ behalf. This is a major consideration in “Chicken” or an arms race, for example.

Where the conditions are met


Due to the limited conditions in which NE can actually be observed, they are rarely treated as a guide to day-to-day
behaviour, or observed in practice in human negotiations. However, as a theoretical concept in economics and
evolutionary biology, the NE has explanatory power. The payoff in economics is utility (or sometimes money), and in evolutionary biology it is gene transmission; both are the fundamental bottom line of survival. Researchers who apply game theory in these fields claim that strategies failing to maximize these for whatever reason will be competed out of the market or environment, which are ascribed the ability to test all strategies. This conclusion is drawn from the
"stability" theory above. In these situations the assumption that the strategy observed is actually a NE has often been
borne out by research.
NE and non-credible threats


The Nash equilibrium is a superset of the subgame perfect Nash equilibrium. The subgame perfect equilibrium, in addition to the Nash equilibrium, requires that the strategy also is a Nash equilibrium in every subgame of that game. This eliminates all non-credible threats, that is, strategies that contain non-rational moves in order to make the counter-player change his strategy.

[Figure: extensive- and normal-form illustrations showing the difference between SPNE and other NE. The blue equilibrium is not subgame perfect because player two makes a non-credible threat at 2(2) to be unkind (U).]

The image to the right shows a simple sequential game that illustrates the issue with subgame imperfect Nash equilibria. In this game player one chooses left (L) or right (R), after which player two is called upon to be kind (K) or unkind (U) to player one. However, player two only stands to gain from being unkind if player one goes left. If player one goes right, the rational player two would de facto be kind to him in that subgame. However, the non-credible threat of being unkind at 2(2) is still part of the blue (L, (U,U)) Nash equilibrium. Therefore, if rational behavior can be expected by both parties, the subgame perfect Nash equilibrium may be a more meaningful solution concept when such dynamic inconsistencies arise.

Proof of existence

Proof using the Kakutani fixed point theorem


Nash's original proof (in his thesis) used Brouwer's fixed-point theorem (e.g., see below for a variant). We give a simpler proof via the Kakutani fixed-point theorem, following Nash's 1950 paper (he credits David Gale with the observation that such a simplification is possible).

To prove the existence of a Nash equilibrium, let $r_i(\sigma_{-i})$ be the best response of player $i$ to the strategies of all other players:

$$r_i(\sigma_{-i}) = \arg\max_{\sigma_i} u_i(\sigma_i, \sigma_{-i}).$$

Here, $\sigma \in \Sigma$ is a mixed-strategy profile in the set of all mixed strategies and $u_i$ is the payoff function for player $i$. Define a set-valued function $r\colon \Sigma \to 2^\Sigma$ such that $r(\sigma) = r_1(\sigma_{-1}) \times \cdots \times r_n(\sigma_{-n})$. The existence of a Nash equilibrium is equivalent to $r$ having a fixed point.

Kakutani's fixed point theorem guarantees the existence of a fixed point if the following four conditions are satisfied.
1. $\Sigma$ is compact, convex, and nonempty.
2. $r(\sigma)$ is nonempty.
3. $r(\sigma)$ is convex.
4. $r(\sigma)$ is upper hemicontinuous.

Condition 1 is satisfied from the fact that $\Sigma$ is a simplex and thus compact. Convexity follows from players' ability to mix strategies. $\Sigma$ is nonempty as long as players have strategies.

Condition 2 is satisfied because players maximize expected payoffs, which is a continuous function over a compact set. The Weierstrass extreme value theorem guarantees that there is always a maximum value.

Condition 3 is satisfied as a result of mixed strategies. Suppose $\sigma_i, \sigma_i' \in r_i(\sigma_{-i})$; then for $\lambda \in [0, 1]$, $\lambda \sigma_i + (1 - \lambda)\sigma_i' \in r_i(\sigma_{-i})$: if two strategies maximize payoffs, then a mix of the two strategies yields the same payoff.

Condition 4 is satisfied by way of Berge's maximum theorem. Because $u_i$ is continuous and $\Sigma$ is compact, $r(\sigma)$ is upper hemicontinuous.

Therefore, there exists a fixed point in $r$ and a Nash equilibrium.[10]

When Nash made this point to John von Neumann in 1949, von Neumann famously dismissed it with the words, "That's trivial, you know. That's just a fixed point theorem." (See Nasar, 1998, p. 94.)

Alternate proof using the Brouwer fixed-point theorem


We have a game $G = (N, A, u)$ where $N$ is the number of players and $A = A_1 \times \cdots \times A_N$ is the action set for the players. All of the action sets $A_i$ are finite. Let $\Delta = \Delta_1 \times \cdots \times \Delta_N$ denote the set of mixed strategies for the players. The finiteness of the $A_i$'s ensures the compactness of $\Delta$.

We can now define the gain functions. For a mixed strategy $\sigma \in \Delta$, we let the gain for player $i$ on action $a \in A_i$ be

$$\text{Gain}_i(\sigma, a) = \max\{0,\; u_i(a, \sigma_{-i}) - u_i(\sigma_i, \sigma_{-i})\}.$$

The gain function represents the benefit a player gets by unilaterally changing his strategy. We now define $g = (g_1, \ldots, g_N)$ where

$$g_i(\sigma)(a) = \sigma_i(a) + \text{Gain}_i(\sigma, a)$$

for $\sigma \in \Delta$, $a \in A_i$. We see that

$$\sum_{a \in A_i} g_i(\sigma)(a) = 1 + \sum_{a \in A_i} \text{Gain}_i(\sigma, a).$$

We now use $g$ to define $f\colon \Delta \to \Delta$ as follows. Let

$$f_i(\sigma)(a) = \frac{g_i(\sigma)(a)}{\sum_{b \in A_i} g_i(\sigma)(b)}$$

for $a \in A_i$. It is easy to see that each $f_i(\sigma)$ is a valid mixed strategy in $\Delta_i$. It is also easy to check that each $f_i(\sigma)(a)$ is a continuous function of $\sigma$, and hence $f$ is a continuous function. Now $\Delta$ is the cross product of a finite number of compact convex sets, and so we get that $\Delta$ is also compact and convex. Therefore we may apply the Brouwer fixed point theorem to $f$. So $f$ has a fixed point in $\Delta$, call it $\sigma^*$.

We claim that $\sigma^*$ is a Nash equilibrium in $G$. For this purpose, it suffices to show that

$$\text{Gain}_i(\sigma^*, a) = 0 \quad \text{for all } i \in \{1, \ldots, N\},\ a \in A_i.$$

This simply states that each player gains no benefit by unilaterally changing his strategy, which is exactly the necessary condition for being a Nash equilibrium.

Now assume that the gains are not all zero. Therefore, there exist $i \in \{1, \ldots, N\}$ and $a \in A_i$ such that $\text{Gain}_i(\sigma^*, a) > 0$. Note then that

$$C := \sum_{b \in A_i} g_i(\sigma^*)(b) = 1 + \sum_{b \in A_i} \text{Gain}_i(\sigma^*, b) > 1.$$

Also we shall denote $\text{Gain}_i(\sigma^*, \cdot)$ as the gain vector indexed by actions in $A_i$. Since $\sigma^*$ is a fixed point of $f$ we have $\sigma_i^*(a) = f_i(\sigma^*)(a) = g_i(\sigma^*)(a)/C$, and therefore

$$(C - 1)\,\sigma_i^* = \text{Gain}_i(\sigma^*, \cdot),$$

so $\sigma_i^*$ is some positive scaling of the gain vector. Now we claim that

$$\sigma_i^*(a)\,\big(u_i(a, \sigma_{-i}^*) - u_i(\sigma_i^*, \sigma_{-i}^*)\big) = \sigma_i^*(a)\,\text{Gain}_i(\sigma^*, a) \quad \text{for all } a \in A_i.$$

To see this, we first note that if $\text{Gain}_i(\sigma^*, a) > 0$ then this is true by definition of the gain function. Now assume that $\text{Gain}_i(\sigma^*, a) = 0$. By our previous statements we have that $\sigma_i^*(a) = \frac{1}{C-1}\text{Gain}_i(\sigma^*, a) = 0$, and so the left term is zero, giving us that the entire expression is $0$ as needed.

So we finally have that

$$0 = \sum_{a \in A_i} \sigma_i^*(a)\,\big(u_i(a, \sigma_{-i}^*) - u_i(\sigma_i^*, \sigma_{-i}^*)\big) = \sum_{a \in A_i} \sigma_i^*(a)\,\text{Gain}_i(\sigma^*, a) = \frac{1}{C-1}\sum_{a \in A_i} \text{Gain}_i(\sigma^*, a)^2 > 0,$$

where the last inequality follows since $\text{Gain}_i(\sigma^*, \cdot)$ is a non-zero vector. But this is a clear contradiction, so all the gains must indeed be zero. Therefore $\sigma^*$ is a Nash equilibrium for $G$ as needed.

Computing Nash equilibria


If a player A has a dominant strategy $s_A$, then there exists a Nash equilibrium in which A plays $s_A$. In the case of two players A and B, there exists a Nash equilibrium in which A plays $s_A$ and B plays a best response to $s_A$. If $s_A$ is a strictly dominant strategy, A plays $s_A$ in all Nash equilibria. If both A and B have strictly dominant strategies, there exists a unique Nash equilibrium in which each plays his strictly dominant strategy.

In games with mixed-strategy Nash equilibria, the probability of a player choosing any particular strategy can be computed by assigning a variable to each strategy that represents a fixed probability of choosing that strategy. In order for a player to be willing to randomize, his expected payoff for each strategy should be the same. In addition, the sum of the probabilities for each strategy of a particular player should be 1. This creates a system of equations from which the probabilities of choosing each strategy can be derived.[5]
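For a 2×2 game this system has a closed-form solution. The sketch below (illustrative, not from the article) implements the indifference conditions and reproduces the answer computed in the matching pennies example that follows.

    def mixed_equilibrium_2x2(A, B):
        """Fully mixed equilibrium of a 2x2 bimatrix game.
        A[i][j], B[i][j]: payoffs of the row and column player.
        Returns (p, q): probabilities of the first row / first column.
        Assumes an interior equilibrium exists (denominators non-zero)."""
        # Column player mixes with q so the row player is indifferent:
        # q*A[0][0] + (1-q)*A[0][1] = q*A[1][0] + (1-q)*A[1][1]
        q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
        # Row player mixes with p so the column player is indifferent.
        p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
        return p, q

    # Matching pennies (see the example below): the answer is p = q = 1/2.
    A = [[-1, 1], [1, -1]]   # row player's payoffs
    B = [[1, -1], [-1, 1]]   # column player's payoffs
    print(mixed_equilibrium_2x2(A, B))  # (0.5, 0.5)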
Examples

Matching pennies

Player B plays H Player B plays T


Player A plays H −1, +1 +1, −1
Player A plays T +1, −1 −1, +1

In the matching pennies game, player A loses a point to B if A and B play the same strategy and wins a point from B
if they play different strategies. To compute the mixed strategy Nash equilibrium, assign A the probability p of
playing H and (1−p) of playing T, and assign B the probability q of playing H and (1−q) of playing T.
E[payoff for A playing H] = (−1)q + (+1)(1−q) = 1−2q
E[payoff for A playing T] = (+1)q + (−1)(1−q) = 2q−1
E[payoff for A playing H] = E[payoff for A playing T] ⇒ 1−2q = 2q−1 ⇒ q = 1/2
E[payoff for B playing H] = (+1)p + (−1)(1−p) = 2p−1
E[payoff for B playing T] = (−1)p + (+1)(1−p) = 1−2p
E[payoff for B playing H] = E[payoff for B playing T] ⇒ 2p−1 = 1−2p ⇒ p = 1/2
Thus a mixed strategy Nash equilibrium in this game is for each player to randomly choose H or T with equal
probability.

Notes
[1] Osborne, Martin J., and Ariel Rubinstein. A Course in Game Theory. Cambridge, MA: MIT, 1994. Print.
[2] Schelling, Thomas, The Strategy of Conflict (http://books.google.es/books?id=7RkL4Z8Yg5AC&dq=thomas+schelling+strategy+of+conflict&printsec=frontcover), copyright 1960, 1980, Harvard University Press, ISBN 0674840313.
[3] G. De Fraja, T. Oliveira, L. Zanchi (2010), 'Must Try Harder: Evaluating the Role of Effort in Educational Attainment (http://www.mitpressjournals.org/doi/pdf/10.1162/REST_a_00013)'. The Review of Economics and Statistics 92:3, pp. 577-597.
[4] P. Chiappori, S. Levitt, and T. Groseclose (2002), 'Testing Mixed-Strategy Equilibria When Players Are Heterogeneous: The Case of Penalty Kicks in Soccer (http://pricetheory.uchicago.edu/levitt/Papers/ChiapporiGrosecloseLevitt2002.pdf)'. American Economic Review 92, pp. 1138-51.
[5] von Ahn, Luis. "Preliminaries of Game Theory" (http://www.scienceoftheweb.org/15-396/lectures_f11/lecture08.pdf). Retrieved 2008-11-07.
[6] B. D. Bernheim, B. Peleg, M. D. Whinston (1987), "Coalition-Proof Equilibria I. Concepts", Journal of Economic Theory 42 (1): 1–12, doi:10.1016/0022-0531(87)90099-8.
[7] R. Aumann (1959), Acceptable points in general cooperative n-person games, in "Contributions to the Theory of Games IV", Princeton Univ. Press, Princeton, N.J.
[8] D. Moreno, J. Wooders (1996), "Coalition-Proof Equilibrium", Games and Economic Behavior 17 (1): 80–112, doi:10.1006/game.1996.0095.
[9] Nash proved that a perfect NE exists for this type of finite extensive form game – it can be represented as a strategy complying with his original conditions for a game with a NE. Such games may not have unique NE, but at least one of the many equilibrium strategies would be played by hypothetical players having perfect knowledge of all 10^150 game trees.
[10] Fudenberg, Drew, and Jean Tirole. Game Theory. Cambridge, MA: MIT Press, 1991.
References

Game theory textbooks


• Dixit, Avinash and Susan Skeath. Games of Strategy. W.W. Norton & Company. (Second edition in 2004)
• Dutta, Prajit K. (1999), Strategies and games: theory and practice, MIT Press, ISBN 978-0-262-04169-0.
Suitable for undergraduate and business students.
• Fudenberg, Drew and Jean Tirole (1991) Game Theory MIT Press.
• Leyton-Brown, Kevin; Shoham, Yoav (2008), Essentials of Game Theory: A Concise, Multidisciplinary
Introduction (http://www.gtessentials.org), San Rafael, CA: Morgan & Claypool Publishers,
ISBN 978-1-598-29593-1. An 88-page mathematical introduction; see Chapter 2. Free online (http://www.
morganclaypool.com/doi/abs/10.2200/S00108ED1V01Y200802AIM003) at many universities.
• Morgenstern, Oskar and John von Neumann (1947) The Theory of Games and Economic Behavior Princeton
University Press
• Myerson, Roger B. (1997), Game theory: analysis of conflict, Harvard University Press,
ISBN 978-0-674-34116-6
• Rubinstein, Ariel; Osborne, Martin J. (1994), A course in game theory, MIT Press, ISBN 978-0-262-65040-3. A
modern introduction at the graduate level.
• Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical
Foundations (http://www.masfoundations.org), New York: Cambridge University Press,
ISBN 978-0-521-89943-7. A comprehensive reference from a computational perspective; see Chapter 3.
Downloadable free online (http://www.masfoundations.org/download.html).
• Gibbons, Robert (1992), Game Theory for Applied Economists, Princeton University Press (July 13, 1992),
ISBN 978-0-691-00395-5. Lucid and detailed introduction to game theory in an explicitly economic context.
• Osborne, Martin J. (2004), An Introduction to Game Theory, Oxford University Press. Introduction to Nash equilibrium.

Original Nash papers


• Nash, John (1950) "Equilibrium points in n-person games" Proceedings of the National Academy of Sciences
36(1):48-49.
• Nash, John (1951) "Non-Cooperative Games" The Annals of Mathematics 54(2):286-295.

Other references
• Mehlmann, A. The Game's Afoot! Game Theory in Myth and Paradox, American Mathematical Society (2000).
• Nasar, Sylvia (1998), "A Beautiful Mind", Simon and Schuster, Inc.

External links
• Complete Proof of Existence of Nash Equilibria (http://wiki.cc.gatech.edu/theory/index.php/
Nash_equilibrium)
Definitions

Cooperative game
This article is about a part of game theory. For video gaming, see Cooperative gameplay. For the similar feature in some board games, see cooperative board game.
In game theory, a cooperative game is a game where groups of players ("coalitions") may enforce cooperative
behaviour, hence the game is a competition between coalitions of players, rather than between individual players. An
example is a coordination game, when players choose the strategies by a consensus decision-making process.
Recreational games are rarely cooperative, because they usually lack mechanisms by which coalitions may enforce
coordinated behaviour on the members of the coalition. Such mechanisms, however, are abundant in real life
situations (e.g. contract law).

Mathematical definition
A cooperative game is given by specifying a value for every coalition. Formally, the game (coalitional game)[1] consists of a finite set of players $N$, called the grand coalition, and a characteristic function $v\colon 2^N \to \mathbb{R}$ from the set of all possible coalitions of players to a set of payments that satisfies $v(\emptyset) = 0$. The function describes how much collective payoff a set of players can gain by forming a coalition, and the game is sometimes called a value game or a profit game. The players are assumed to choose which coalitions to form, according to their estimate of the way the payment will be divided among coalition members.

Conversely, a cooperative game can also be defined with a characteristic cost function $c\colon 2^N \to \mathbb{R}$ satisfying $c(\emptyset) = 0$. In this setting, players must accomplish some task, and the characteristic function represents the cost of a set of players accomplishing the task together. A game of this kind is known as a cost game. Although most cooperative game theory deals with profit games, all concepts can easily be translated to the cost setting.

Duality
Let $v$ be a profit game. The dual game of $v$ is the cost game $v^*$ defined as

$$v^*(S) = v(N) - v(N \setminus S), \quad \forall S \subseteq N.$$

Intuitively, the dual game represents the opportunity cost for a coalition $S$ of not joining the grand coalition $N$. A dual profit game $c^*$ can be defined identically for a cost game $c$. A cooperative game and its dual are in some sense equivalent, and they share many properties. For example, the core of a game and its dual are equal. For more details on cooperative game duality, see for instance (Bilbao 2000).
Subgames
Let $S \subseteq N$ be a non-empty coalition of players. The subgame $v_S\colon 2^S \to \mathbb{R}$ on $S$ is naturally defined as

$$v_S(T) = v(T), \quad \forall T \subseteq S.$$

In other words, we simply restrict our attention to coalitions contained in $S$. Subgames are useful because they allow us to apply solution concepts defined for the grand coalition on smaller coalitions.

Properties for characterization

Superadditivity
Characteristic functions are often assumed to be superadditive (Owen 1995, p. 213). This means that the value of a union of disjoint coalitions is no less than the sum of the coalitions' separate values:

$$v(S \cup T) \geq v(S) + v(T) \quad \text{whenever } S, T \subseteq N \text{ satisfy } S \cap T = \emptyset.$$

Monotonicity
Larger coalitions gain more: $S \subseteq T \Rightarrow v(S) \leq v(T)$. This follows from superadditivity if payoffs are normalized so singleton coalitions have value zero.
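Both properties are finite conditions and can be checked by brute force for small games. Below is an illustrative Python sketch (the payoff values are made up for the example); a characteristic function is represented as a dictionary over frozensets.

    from itertools import chain, combinations

    def subsets(players):
        return chain.from_iterable(combinations(players, r)
                                   for r in range(len(players) + 1))

    def is_superadditive(v, players):
        """v(S | T) >= v(S) + v(T) for all disjoint coalitions S, T."""
        for S in map(frozenset, subsets(players)):
            for T in map(frozenset, subsets(players)):
                if S.isdisjoint(T) and v[S | T] < v[S] + v[T]:
                    return False
        return True

    def is_monotonic(v, players):
        """S a subset of T implies v(S) <= v(T)."""
        return all(v[S] <= v[T]
                   for S in map(frozenset, subsets(players))
                   for T in map(frozenset, subsets(players)) if S <= T)

    # Illustrative 3-player profit game (values invented for the example).
    players = {1, 2, 3}
    v = {frozenset(S): 0 for S in subsets(players)}
    v[frozenset({1, 2})] = v[frozenset({1, 3})] = v[frozenset({2, 3})] = 60
    v[frozenset(players)] = 100
    print(is_superadditive(v, players), is_monotonic(v, players))  # True True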

Properties for Simple games


A coalitional game $v$ is simple if payoffs are either 1 or 0, i.e., coalitions are either "winning" or "losing". Equivalently, a simple game can be defined as a collection $W$ of coalitions, where the members of $W$ are called winning coalitions, and the others losing coalitions. It is sometimes assumed that a simple game is nonempty or that it does not contain an empty set. In other areas of mathematics, simple games are also called hypergraphs or Boolean functions (logic functions).
• A simple game $W$ is monotonic if any coalition containing a winning coalition is also winning, that is, if $S \in W$ and $S \subseteq T$ imply $T \in W$.
• A simple game $W$ is proper if the complement (opposition) of any winning coalition is losing, that is, if $S \in W$ implies $N \setminus S \notin W$.
• A simple game $W$ is strong if the complement of any losing coalition is winning, that is, if $S \notin W$ implies $N \setminus S \in W$.
• If a simple game $W$ is proper and strong, then a coalition is winning if and only if its complement is losing, that is, $S \in W$ iff $N \setminus S \notin W$. (If $v$ is a coalitional simple game that is proper and strong, then $v(S) = 1 - v(N \setminus S)$ for any $S$.)
• A veto player (vetoer) in a simple game is a player that belongs to all winning coalitions. Supposing there is a veto player, any coalition not containing a veto player is losing. A simple game $W$ is weak (collegial) if it has a veto player, that is, if the intersection $\bigcap W$ of all winning coalitions is nonempty.
• A dictator in a simple game is a veto player such that any coalition containing this player is winning. The dictator does not belong to any losing coalition. (Dictator games in experimental economics are unrelated to this.)
• A carrier of a simple game $W$ is a set $T$ such that for any coalition $S$, we have $S \in W$ iff $S \cap T \in W$. When a simple game has a carrier, any player not belonging to it is ignored. A simple game is sometimes called finite if it has a finite carrier (even if $N$ is infinite).
• The Nakamura number of a simple game is the minimal number of winning coalitions with empty intersection. The number measures the degree of rationality; it is an indicator of the extent to which an aggregation rule can yield well-defined choices (several of these checks are illustrated in the sketch below).
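For small player sets these definitions can be tested directly. The following Python sketch is illustrative (it assumes a simple majority-voting game, not one from the article): it checks properness and strongness, finds veto players, and computes the Nakamura number by brute force.

    from itertools import combinations

    N = frozenset({1, 2, 3, 4})
    # Illustrative simple game: majority voting, a coalition wins with 3+ members.
    W = [frozenset(c) for r in range(3, 5) for c in combinations(N, r)]

    is_proper = all(N - S not in W for S in W)
    losing = [frozenset(c) for r in range(len(N) + 1)
              for c in combinations(N, r) if frozenset(c) not in W]
    is_strong = all(N - S in W for S in losing)
    veto_players = frozenset.intersection(*W)  # empty here: no veto player

    def nakamura_number(W):
        """Minimal number of winning coalitions with empty intersection."""
        for k in range(1, len(W) + 1):
            for combo in combinations(W, k):
                if not frozenset.intersection(*combo):
                    return k
        return None  # collegial game: a veto player exists

    print(is_proper, is_strong, veto_players, nakamura_number(W))
    # True False frozenset() 4: the game is proper but not strong, has no
    # veto player, and its Nakamura number is 4.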

Relation with non-cooperative theory


Let G be a strategic (non-cooperative) game. Then, assuming that coalitions have the ability to enforce coordinated
behaviour, there are several cooperative games associated with G. These games are often referred to as
representations of G.
• The α-effective game associates with each coalition the sum of gains its members can 'guarantee' by joining forces. By 'guaranteeing', it is meant that the value is the max-min, i.e., the maximal value of the minimum taken over the opposition's strategies.
• The β-effective game associates with each coalition the sum of gains its members can 'strategically guarantee' by joining forces. By 'strategically guaranteeing', it is meant that the value is the min-max, i.e., the minimal value of the maximum taken over the opposition's strategies.

Solution concepts
The main assumption in cooperative game theory is that the grand coalition $N$ will form. The challenge is then to allocate the payoff $v(N)$ among the players in some fair way. (This assumption is not restrictive, because even if players split off and form smaller coalitions, we can apply solution concepts to the subgames defined by whatever coalitions actually form.) A solution concept is a vector $x \in \mathbb{R}^N$ that represents the allocation to each player. Researchers have proposed different solution concepts based on different notions of fairness. Some properties to look for in a solution concept include:

• Efficiency: The payoff vector exactly splits the total value: $\sum_{i \in N} x_i = v(N)$.
• Individual rationality: No player receives less than what he could get on his own: $x_i \geq v(\{i\}), \forall i \in N$.
• Existence: The solution concept exists for any game $v$.
• Uniqueness: The solution concept is unique for any game $v$.
• Computational ease: The solution concept can be calculated efficiently (i.e., in polynomial time with respect to the number of players $|N|$).
• Symmetry: The solution concept allocates equal payments $x_i = x_j$ to symmetric players $i$, $j$. Two players $i$, $j$ are symmetric if $v(S \cup \{i\}) = v(S \cup \{j\})$ for every coalition $S$ containing neither $i$ nor $j$; that is, we can exchange one player for the other in any coalition that contains only one of the players and not change the payoff.
• Additivity: The allocation to a player in a sum of two games is the sum of the allocations to the player in each individual game. Mathematically, if $v$ and $w$ are games, the game $(v + w)$ simply assigns to any coalition the sum of the payoffs the coalition would get in the two individual games. An additive solution concept assigns to every player in $(v + w)$ the sum of what he would receive in $v$ and $w$.
• Zero allocation to null players: The allocation to a null player is zero. A null player $i$ satisfies $v(S \cup \{i\}) = v(S), \forall S \subseteq N \setminus \{i\}$. In economic terms, a null player's marginal value to any coalition that does not contain him is zero.

An efficient payoff vector is called a pre-imputation, and an individually rational pre-imputation is called an imputation. Most solution concepts are imputations (see the sketch below for these two checks).

The stable set


The stable set of a game (also known as the von Neumann-Morgenstern solution (von Neumann & Morgenstern
1944)) was the first solution proposed for games with more than 2 players. Let $v$ be a game and let $x$, $y$ be two
imputations of $v$. Then $x$ dominates $y$ if some coalition $S \neq \emptyset$ satisfies $x_i > y_i$ for all $i \in S$ and $\sum_{i \in S} x_i \leq v(S)$. In other words, players in $S$ prefer the payoffs from $x$ to those from $y$, and they can threaten to leave the grand coalition if $y$ is used because the payoff they obtain on their own is at least as large as the allocation they receive under $x$.
A stable set is a set of imputations that satisfies two properties:
• Internal stability: No payoff vector in the stable set is dominated by another vector in the set.
• External stability: All payoff vectors outside the set are dominated by at least one vector in the set.
Von Neumann and Morgenstern saw the stable set as the collection of acceptable behaviours in a society: None is
clearly preferred to any other, but for each unacceptable behaviour there is a preferred alternative. The definition is
very general allowing the concept to be used in a wide variety of game formats.

Properties
• A stable set may or may not exist (Lucas 1969), and if it exists it is typically not unique (Lucas 1992). Stable sets
are usually difficult to find. This and other difficulties have led to the development of many other solution
concepts.
• A positive fraction of cooperative games have unique stable sets consisting of the core (Owen 1995, p. 240.).
• A positive fraction of cooperative games have stable sets which discriminate $n - 2$ players. In such sets at least $n - 3$ of the discriminated players are excluded (Owen 1995, p. 240).

The core
Let $v$ be a game. The core of $v$ is the set of payoff vectors

$C(v) = \{ x \in \mathbb{R}^N : \sum_{i \in N} x_i = v(N); \ \sum_{i \in S} x_i \geq v(S) \text{ for all } S \subseteq N \}.$

In words, the core is the set of imputations under which no coalition has a value greater than the sum of its members'
payoffs. Therefore, no coalition has an incentive to leave the grand coalition and receive a larger payoff.
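Both conditions can be checked directly on small examples. The sketch below (a hypothetical three-player game of our own choosing, with illustrative helper names) tests core membership:

```python
from itertools import combinations

def in_core(v, payoff):
    """Check whether `payoff` (dict player -> float) lies in the core of
    the game `v` (dict mapping frozensets of players to their worth)."""
    players = frozenset(payoff)
    # Efficiency: the payoff vector must exactly split v(N).
    if abs(sum(payoff.values()) - v[players]) > 1e-9:
        return False
    # No coalition S may be worth more than what its members receive.
    for r in range(1, len(players) + 1):
        for s in map(frozenset, combinations(players, r)):
            if v[s] > sum(payoff[i] for i in s) + 1e-9:
                return False
    return True

# Hypothetical game: every pair is worth 2, the grand coalition 3.
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 2, frozenset({1, 3}): 2, frozenset({2, 3}): 2,
     frozenset({1, 2, 3}): 3}
print(in_core(v, {1: 1, 2: 1, 3: 1}))  # True: every pair receives exactly 2
print(in_core(v, {1: 2, 2: 1, 3: 0}))  # False: coalition {2,3} receives 1 < 2
```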

Properties
• The core of a game may be empty (see the Bondareva-Shapley theorem). Games with non-empty cores are called
balanced.
• If it is non-empty, the core does not necessarily contain a unique vector.
• The core is contained in any stable set, and if the core is stable it is the unique stable set (see (Driessen 1988) for a
proof.)

The core of a simple game with respect to preferences


For simple games, there is another notion of the core, when each player is assumed to have preferences on a set $X$
of alternatives. A profile is a list $p = (\succeq_i^p)_{i \in N}$ of individual preferences on $X$. Here $x \succ_i^p y$ means that
individual $i$ prefers alternative $x$ to $y$ at profile $p$. Given a simple game $v$ and a profile $p$, a dominance
relation $\succ_v^p$ is defined on $X$ by $x \succ_v^p y$ if and only if there is a winning coalition $S$ (i.e., $v(S) = 1$) satisfying
$x \succ_i^p y$ for all $i \in S$. The core $C(v, p)$ of the simple game $v$ with respect to the profile $p$ of preferences is
the set of alternatives undominated by $\succ_v^p$ (the set of maximal elements of $X$ with respect to $\succ_v^p$):
$x \in C(v, p)$ if and only if there is no $y \in X$ such that $y \succ_v^p x$.

The Nakamura number of a simple game is the minimal number of winning coalitions with empty intersection.
Nakamura's theorem states that the core $C(v, p)$ is nonempty for all profiles $p$ of acyclic (alternatively, transitive)
preferences if and only if $X$ is finite and the cardinal number (the number of elements) of $X$ is less than the
Nakamura number of $v$. A variant by Kumabe and Mihara states that the core $C(v, p)$ is nonempty for all profiles $p$
of preferences that have a maximal element if and only if the cardinal number of $X$ is less than the Nakamura
number of $v$. (See Nakamura number for details.)

The strong epsilon-core


Because the core may be empty, a generalization was introduced in (Shapley & Shubik 1966). The strong $\varepsilon$-core
for some number $\varepsilon \in \mathbb{R}$ is the set of payoff vectors

$C_\varepsilon(v) = \{ x \in \mathbb{R}^N : \sum_{i \in N} x_i = v(N); \ \sum_{i \in S} x_i \geq v(S) - \varepsilon \text{ for all } S \neq \emptyset, S \neq N \}.$

In economic terms, the strong $\varepsilon$-core is the set of pre-imputations where no coalition can improve its payoff by
leaving the grand coalition, if it must pay a penalty of $\varepsilon$ for leaving. Note that $\varepsilon$ may be negative, in which case it
represents a bonus for leaving the grand coalition. Clearly, regardless of whether the core is empty, the strong
$\varepsilon$-core will be non-empty for a large enough value of $\varepsilon$ and empty for a small enough (possibly negative) value of
$\varepsilon$. Following this line of reasoning, the least-core, introduced in (Maschler, Peleg & Shapley 1979), is the intersection
of all non-empty strong $\varepsilon$-cores. It can also be viewed as the strong $\varepsilon$-core for the smallest value of $\varepsilon$ that makes
the set non-empty (Bilbao 2000).

The Shapley value


The Shapley value is the unique payoff vector that is efficient, symmetric, additive, and assigns zero payoffs to
dummy players. It was introduced by Lloyd Shapley (Shapley 1953). The Shapley value of a superadditive game is
individually rational, but this is not true in general (Driessen 1988).
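One standard characterization of the Shapley value is the average of each player's marginal contributions over all orderings of the players; the brute-force sketch below (exponential in the number of players, fine for three) computes it for a hypothetical glove game:

```python
from itertools import permutations

def shapley_value(players, v):
    """Average marginal contribution over all orderings of the players;
    `v` maps frozensets of players to their worth."""
    value = {i: 0.0 for i in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for i in order:
            value[i] += v[coalition | {i}] - v[coalition]
            coalition = coalition | {i}
    return {i: value[i] / len(orderings) for i in players}

# Hypothetical glove game: player 1 owns a left glove, players 2 and 3 own
# right gloves; only a matched pair (worth 1) creates value.
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 1, frozenset({1, 3}): 1, frozenset({2, 3}): 0,
     frozenset({1, 2, 3}): 1}
print(shapley_value([1, 2, 3], v))  # {1: 0.667, 2: 0.167, 3: 0.167} approx.
```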

The kernel
Let $v$ be a game, and let $x \in \mathbb{R}^N$ be an efficient payoff vector. The maximum surplus of player $i$ over
player $j$ with respect to $x$ is

$s_{ij}^v(x) = \max \{ v(S) - \sum_{k \in S} x_k : S \ni i, S \not\ni j \},$

the maximal amount player $i$ can gain without the cooperation of player $j$ by withdrawing from the grand coalition $N$
under payoff vector $x$, assuming that the other players in $i$'s withdrawing coalition are satisfied with their payoffs
under $x$. The maximum surplus is a way to measure one player's bargaining power over another. The kernel of $v$ is
the set of imputations $x$ that satisfy

• $(s_{ij}^v(x) - s_{ji}^v(x)) \times (x_j - v(\{j\})) \leq 0$, and
• $(s_{ji}^v(x) - s_{ij}^v(x)) \times (x_i - v(\{i\})) \leq 0$

for every pair of players $i$ and $j$. Intuitively, player $i$ has more bargaining power than player $j$ with respect to
imputation $x$ if $s_{ij}^v(x) > s_{ji}^v(x)$, but player $j$ is immune to player $i$'s threats if $x_j = v(\{j\})$, because he can obtain
this payoff on his own. The kernel contains all imputations where no player has this bargaining power over another.
This solution concept was first introduced in (Davis & Maschler 1965).
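Computing the maximum surplus is a brute-force maximization over the coalitions containing i but not j; a minimal sketch (reusing a hypothetical game, not one from the text):

```python
from itertools import combinations

def max_surplus(v, x, i, j):
    """s_ij(x): the most player i can gain by withdrawing from the grand
    coalition without j, with his partners kept at their payoffs under x."""
    others = frozenset(x) - {i, j}
    candidates = (frozenset(extra) | {i}
                  for r in range(len(others) + 1)
                  for extra in combinations(others, r))
    return max(v[s] - sum(x[k] for k in s) for s in candidates)

# Hypothetical game: every pair is worth 2, the grand coalition 3.
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 2, frozenset({1, 3}): 2, frozenset({2, 3}): 2,
     frozenset({1, 2, 3}): 3}
x = {1: 1, 2: 1, 3: 1}
print(max_surplus(v, x, 1, 2))  # 0, attained by coalition {1, 3}
```

Here every pair of players has equal surpluses against each other, so the symmetric imputation lies in the kernel.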

The nucleolus
Let $v$ be a game, and let $x \in \mathbb{R}^N$ be a payoff vector. The excess of $x$ for a coalition $S \subseteq N$ is the
quantity $v(S) - \sum_{i \in S} x_i$; that is, the gain that players in coalition $S$ can obtain if they withdraw from the grand
coalition under payoff $x$ and instead take the payoff $v(S)$.

Now let $\theta(x) \in \mathbb{R}^{2^N}$ be the vector of excesses of $x$, arranged in non-increasing order. In other words,
$\theta_i(x) \geq \theta_j(x)$ whenever $i < j$. Notice that $x$ is in the core of $v$ if and only if it is a pre-imputation and $\theta_1(x) \leq 0$.
To define the nucleolus, we consider the lexicographic ordering of vectors in $\mathbb{R}^{2^N}$: for two payoff vectors $x$, $y$,
we say $\theta(x)$ is lexicographically smaller than $\theta(y)$ if for some index $k$, we have $\theta_i(x) = \theta_i(y)$ for all $i < k$ and
$\theta_k(x) < \theta_k(y)$. (The ordering is called lexicographic because it mimics alphabetical ordering used to arrange
words in a dictionary.) The nucleolus of $v$ is the lexicographically minimal imputation, based on this ordering. This
solution concept was first introduced in (Schmeidler 1969).
Although the definition of the nucleolus seems abstract, (Maschler, Peleg & Shapley 1979) gave a more intuitive
description: Starting with the least-core, record the coalitions for which the right-hand side of the inequality in the
definition of cannot be further reduced without making the set empty. Continue decreasing the right-hand
side for the remaining coalitions, until it cannot be reduced without making the set empty. Record the new set of
coalitions for which the inequalities hold at equality; continue decreasing the right-hand side of remaining coalitions
and repeat this process as many times as necessary until all coalitions have been recorded. The resulting payoff
vector is the nucleolus.
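The excess vector θ(x) is easy to build explicitly, which makes the lexicographic comparison concrete; a sketch using the same hypothetical game as above:

```python
from itertools import combinations

def excess_vector(v, x):
    """All coalition excesses v(S) - x(S), arranged in non-increasing order."""
    players = sorted(x)
    excesses = [v[frozenset(s)] - sum(x[i] for i in s)
                for r in range(1, len(players) + 1)
                for s in combinations(players, r)]
    return sorted(excesses, reverse=True)

v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 2, frozenset({1, 3}): 2, frozenset({2, 3}): 2,
     frozenset({1, 2, 3}): 3}
print(excess_vector(v, {1: 1, 2: 1, 3: 1}))      # [0, 0, 0, 0, -1, -1, -1]
print(excess_vector(v, {1: 1.5, 2: 1, 3: 0.5}))  # [0.5, 0, 0, -0.5, -0.5, -1, -1.5]
```

The first vector is lexicographically smaller than the second, so the symmetric imputation is the better of the two candidates; by symmetry it is in fact the nucleolus of this game.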

Properties
• Although the definition does not explicitly state it, the nucleolus is always unique. (See Section II.7 of (Driessen
1988) for a proof.)
• If the core is non-empty, the nucleolus is in the core.
• The nucleolus is always in the kernel, and since the kernel is contained in the bargaining set, it is always in the
bargaining set (see (Driessen 1988) for details.)

Convex cooperative games


Introduced by Shapley in (Shapley 1971), convex cooperative games capture the intuitive property some games have
of "snowballing". Specifically, a game is convex if its characteristic function is supermodular:

It can be shown (see, e.g., Section V.1 of (Driessen 1988)) that the supermodularity of is equivalent to

that is, "the incentives for joining a coalition increase as the coalition grows" (Shapley 1971), leading to the
aforementioned snowball effect. For cost games, the inequalities are reversed, so that we say the cost game is convex
if the characteristic function is submodular.

Properties
Convex cooperative games have many nice properties:
• Supermodularity trivially implies superadditivity.
• Convex games are totally balanced: The core of a convex game is non-empty, and since any subgame of a convex
game is convex, the core of any subgame is also non-empty.
• A convex game has a unique stable set that coincides with its core.
• The Shapley value of a convex game is the center of gravity of its core.
• An extreme point (vertex) of the core can be found in polynomial time using the greedy algorithm: let
$\pi : N \to N$ be a permutation of the players, and let $S_i = \{ j \in N : \pi(j) \leq i \}$ be the set of players ordered $1$
through $i$ in $\pi$, for any $i = 0, \ldots, n$, with $S_0 = \emptyset$. Then the payoff $x \in \mathbb{R}^N$ defined by
$x_i = v(S_{\pi(i)}) - v(S_{\pi(i)-1})$, $i \in N$, is a vertex of the core of $v$. Any vertex of the core can be constructed in
this way by choosing an appropriate permutation $\pi$.
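A sketch of the greedy construction, using a hypothetical convex game v(S) = |S|² (its characteristic function is supermodular, so every marginal-contribution vector is a core vertex):

```python
from itertools import combinations

def core_vertex(v, order):
    """Marginal-contribution payoffs for one ordering of the players."""
    x, coalition = {}, frozenset()
    for i in order:
        x[i] = v[coalition | {i}] - v[coalition]
        coalition = coalition | {i}
    return x

players = [1, 2, 3]
v = {frozenset(s): len(s) ** 2
     for r in range(len(players) + 1) for s in combinations(players, r)}
print(core_vertex(v, (1, 2, 3)))  # {1: 1, 2: 3, 3: 5}
print(core_vertex(v, (3, 1, 2)))  # {3: 1, 1: 3, 2: 5}
```

Different permutations give different vertices; averaging these marginal vectors over all permutations recovers the Shapley value, which for a convex game therefore lies in the core (it is the center of gravity of the core, as noted above).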

Similarities and differences with combinatorial optimization


Submodular and supermodular set functions are also studied in combinatorial optimization. Many of the results in
(Shapley 1971) have analogues in (Edmonds 1970), where submodular functions were first presented as
generalizations of matroids. In this context, the core of a convex cost game is called the base polyhedron, because its
elements generalize base properties of matroids.
However, the optimization community generally considers submodular functions to be the discrete analogues of
convex functions (Lovász 1983), because the minimization of both types of functions is computationally tractable.
Unfortunately, this conflicts directly with Shapley's original definition of supermodular functions as "convex".

References
[1] $2^N$ denotes the power set of $N$.
• Bilbao, Jesús Mario (2000), Cooperative Games on Combinatorial Structures, Kluwer Academic Publishers
• Davis, M.; Maschler, M. (1965), "The kernel of a cooperative game", Naval Research Logistics Quarterly 12 (3):
223–259, doi:10.1002/nav.3800120303
• Driessen, Theo (1988), Cooperative Games, Solutions and Applications, Kluwer Academic Publishers
• Edmonds, Jack (1970), "Submodular functions, matroids and certain polyhedra", in Guy, R.; Hanani, H.; Sauer,
N. et al., Combinatorial Structures and Their Applications, New York: Gordon and Breach, pp. 69–87
• Lovász, László (1983), "Submodular functions and convexity", in Bachem, A.; Grötschel, M.; Korte, B.,
Mathematical Programming—The State of the Art, Berlin: Springer, pp. 235–257
• Leyton-Brown, Kevin; Shoham, Yoav (2008), Essentials of Game Theory: A Concise, Multidisciplinary
Introduction (http://www.gtessentials.org), San Rafael, CA: Morgan & Claypool Publishers,
ISBN 978-1-598-29593-1. An 88-page mathematical introduction; see Chapter 8. Free online (http://www.
morganclaypool.com/doi/abs/10.2200/S00108ED1V01Y200802AIM003) at many universities.
• Lucas, William F. (1969), "The Proof That a Game May Not Have a Solution", Transactions of the American
Mathematical Society (American Mathematical Society) 136: 219–229, doi:10.2307/1994798, JSTOR 1994798.
• Lucas, William F. (1992), "Von Neumann-Morgenstern Stable Sets", Handbook of Game Theory, Volume I,
Amsterdam: Elsevier, pp. 543–590
• Luce, R.D. and Raiffa, H. (1957) Games and Decisions: An Introduction and Critical Survey, Wiley & Sons. (see
Chapter 8).
• Maschler, M.; Peleg, B.; Shapley, Lloyd S. (1979), "Geometric properties of the kernel, nucleolus, and related
solution concepts", Mathematics of Operations Research 4 (4): 303–338, doi:10.1287/moor.4.4.303
• Osborne, M.J. and Rubinstein, A. (1994) A Course in Game Theory, MIT Press (see Chapters 13,14,15)

• Moulin, Herve (1988), Axioms of Cooperative Decision Making (1st ed.), Cambridge: Cambridge University
Press, ISBN 0-521-42458-5
• Owen, Guillermo (1995), Game Theory (3rd ed.), San Diego: Academic Press, ISBN 0-12-531151-6
• Schmeidler, D. (1969), "The nucleolus of a characteristic function game", SIAM Journal of Applied Mathematics
17 (6): 1163–1170, doi:10.1137/0117107.
• Shapley, Lloyd S. (1953), "A value for n-person games", in Kuhn, H.; Tucker, A.W., Contributions to the
Theory of Games II, Princeton, New Jersey: Princeton University Press, pp. 307–317
• Shapley, Lloyd S. (1971), "Cores of convex games", International Journal of Game Theory 1 (1): 11–26,
doi:10.1007/BF01753431
• Shapley, Lloyd S.; Shubik, M. (1966), "Quasi-cores in a monetary economy with non-convex preferences",
Econometrica (The Econometric Society) 34 (4): 805–827, doi:10.2307/1910101, JSTOR 1910101
• Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical
Foundations (http://www.masfoundations.org), New York: Cambridge University Press,
ISBN 978-0-521-89943-7. A comprehensive reference from a computational perspective; see Chapter 12.
Downloadable free online (http://www.masfoundations.org/download.html).
• von Neumann, John; Morgenstern, Oskar (1944), Theory of Games and Economic Behavior, Princeton: Princeton
University Press

Information set
In game theory, an information set is a set that, for a particular player, establishes all the possible moves that could
have taken place in the game so far, given what that player has observed. If the game has perfect information, every
information set contains only one member, namely the point actually reached at that stage of the game. Otherwise, it
is the case that some players cannot be sure exactly what has taken place so far in the game and what their position
is.
More specifically, in the extensive form, an information set is a set of decision nodes such that:
1. Every node in the set belongs to one player.
2. When play reaches the information set, the player with the move cannot differentiate between nodes within the
information set, i.e. if the information set contains more than one node, the player to whom that set belongs does
not know which node in the set has been reached.
The notion of information set was introduced by John von Neumann, motivated by his study of the game of poker.

Example
At the right are two versions of the
battle of the sexes game, shown in
extensive form.
The first game is simply sequential: when player 2 has the chance to move, he or she is aware of whether player 1 has chosen O(pera) or F(ootball).
The second game is also sequential, but the dotted line shows player 2's information set. This is the common way to show that when player 2 moves, he or she is not aware of what player 1 did.
This difference also leads to different
predictions for the two games. In the
first game, player 1 has the upper hand.
They know that they can choose
O(pera) safely because once player 2
knows that player 1 has chosen opera,
player 2 would rather go along for o(pera) and get 2 than choose f(ootball) and get 0. Formally, that's applying
subgame perfection to solve the game.

In the second game, player 2 can't observe what player 1 did, so it might as well be a simultaneous game. So
subgame perfection doesn't get us anything that Nash equilibrium can't get us, and we have the standard 3 possible
equilibria:
1. Both choose opera;
2. both choose football;
3. or both use a mixed strategy, with player 1 choosing O(pera) 3/5 of the time, and player 2 choosing f(ootball) 3/5
of the time.

References
• Ken Binmore, Game Theory: A Very Short Introduction, Oxford University Press, pp. 88–89. ISBN 0199218463

Preference
Definitions in different disciplines
The term “preferences” is used in a variety of related, but not identical, ways in the scientific literature. This makes it
necessary to make explicit the sense in which the term is used in different social sciences.
In psychology, preferences could be conceived of as an individual’s attitude towards a set of objects, typically
reflected in an explicit decision-making process (Lichtenstein & Slovic, 2006). Alternatively, one could interpret the
term “preference” to mean evaluative judgment in the sense of liking or disliking an object (e.g., Scherer, 2005)
which is the most typical definition employed in psychology. However, it does not mean that a preference is
necessarily stable over time. Preference can be notably modified by decision-making processes, such as choices
(Brehm, 1956; Sharot, De Martino, & Dolan, 2009), even in an unconscious way (see Coppin, Delplanque, Cayeux,
Porcherot, & Sander, 2010).
"Preference" may also refer to non-choices, such as genetic and biological explanations for one's preference. Sexual
orientation, for example, is no longer considered a sexual preference by most individuals, but is debatable based on
philosophical and/or scientific ideas.

References
• Brehm, J.W. (1956). Post-decision changes in desirability of choice alternatives. Journal of Abnormal and Social
Psychology, 52, 384-389.
• Coppin, G., Delplanque, S., Cayeux, I., Porcherot, C., & Sander, D. (2010). I’m no longer torn after choice: How
explicit choices can implicitly shape preferences for odors. Psychological Science, 21, 489-493.
• Lichtenstein, S., & Slovic, P. (2006). The construction of preference. New York: Cambridge University Press.
• Scherer, K.R. (2005). What are emotions? And how can they be measured? Social Science Information, 44,
695-729.
• Sharot, T., De Martino, B., & Dolan, R.J. (2009). How choice reveals and shapes expected hedonic outcome.
Journal of Neuroscience, 29, 3760-3765.

External links
• Stanford Encyclopedia of Philosophy article on 'Preferences' (http://plato.stanford.edu/entries/preferences/)
• Customer preference formation (http://www.icrsurvey.com/docs/Customer Preference Formation_1205.doc)
DOC (white paper from ICR)

Normal-form game
In game theory, normal form is a way of describing a game. Unlike extensive form, normal-form representations
are not graphical per se, but rather represent the game by way of a matrix. While this approach can be of greater use
in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to
extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable
strategies, and their corresponding payoffs, of each player.
In static games of complete, perfect information, a normal-form representation of a game is a specification of
players' strategy spaces and payoff functions. A strategy space for a player is the set of all strategies available to that
player, where a strategy is a complete plan of action for every stage of the game, regardless of whether that stage
actually arises in play. A payoff function for a player is a mapping from the cross-product of players' strategy spaces
to that player's set of payoffs (normally the set of real numbers, where the number represents a cardinal or ordinal
utility, often cardinal in the normal-form representation); i.e. the payoff function of a player takes as its
input a strategy profile (that is, a specification of strategies for every player) and yields a representation of payoff as
its output.

An example

A normal-form game
Player 1 \ Player 2 Player 2 chooses left Player 2 chooses right

Player 1 chooses top 4, 3 −1, −1

Player 1 chooses bottom 0, 0 3, 4

The matrix to the right is a normal-form representation of a game in which players move simultaneously (or at least
do not observe the other player's move before making their own) and receive the payoffs as specified for the
combinations of actions played. For example, if player 1 plays top and player 2 plays left, player 1 receives 4 and
player 2 receives 3. In each cell, the first number represents the payoff to the row player (in this case player 1), and
the second number represents the payoff to the column player (in this case player 2).

Other representations
Often symmetric games (where the payoffs do not depend on which player chooses each action) are represented with
only one payoff. This is the payoff for the row player. For example, the payoff matrices on the right and left below
represent the same game.

        Stag   Hare               Stag   Hare
Stag    3, 3   0, 2       Stag    3      0
Hare    2, 0   2, 2       Hare    2      2

Uses of normal form

Dominated strategies

The Prisoner's Dilemma

Cooperate Defect
Cooperate −1, −1 −5, 0
Defect 0, −5 −2, −2

The payoff matrix facilitates elimination of dominated strategies, and it is usually used to illustrate this concept. For
example, in the prisoner's dilemma (to the right), we can see that each prisoner can either "cooperate" or "defect". If
exactly one prisoner defects, he gets off easily and the other prisoner is locked up for good. However, if they both
defect, they will both be locked up for longer. One can determine that Cooperate is strictly dominated by Defect.
One must compare the first numbers in each column, in this case 0 > −1 and −2 > −5. This shows that no matter what
the column player chooses, the row player does better by choosing Defect. Similarly, one compares the second
payoff in each row; again 0 > −1 and −2 > −5. This shows that no matter what row does, column does better by
choosing Defect. This demonstrates the unique Nash equilibrium of this game is (Defect, Defect).
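The comparison is mechanical, as the following sketch for the prisoner's dilemma matrix above illustrates:

```python
# Payoffs in the prisoner's dilemma above; index 0 = Cooperate, 1 = Defect.
row = [[-1, -5], [0, -2]]   # row player's payoffs
col = [[-1, 0], [-5, -2]]   # column player's payoffs

def strictly_dominates(payoff, a, b):
    """True if strategy a yields a strictly higher payoff than b
    against every opponent strategy."""
    return all(payoff[a][j] > payoff[b][j] for j in range(len(payoff[a])))

print(strictly_dominates(row, 1, 0))   # True: 0 > -1 and -2 > -5
col_t = [list(t) for t in zip(*col)]   # the column player's point of view
print(strictly_dominates(col_t, 1, 0)) # True: Defect dominates for both players
```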

Sequential games in normal form

Both extensive and normal form illustrations of a sequential game, with subgame imperfect and subgame perfect Nash equilibria marked in red and blue respectively.

A sequential game

Left, Left Left, Right Right, Left Right, Right


Top 4, 3 4, 3 −1, −1 −1, −1
Bottom 0, 0 3, 4 0, 0 3, 4

These matrices only represent games in which moves are simultaneous (or, more generally, information is
imperfect). The above matrix does not represent the game in which player 1 moves first, observed by player 2, and
then player 2 moves, because it does not specify each of player 2's strategies in this case. In order to represent this
sequential game we must specify all of player 2's actions, even in contingencies that can never arise in the course of
the game. In this game, player 2 has two actions, as before: Left and Right. Unlike before, he has four strategies,
contingent on player 1's actions. The strategies are:
1. Left if player 1 plays Top and Left otherwise
2. Left if player 1 plays Top and Right otherwise
3. Right if player 1 plays Top and Left otherwise
4. Right if player 1 plays Top and Right otherwise
On the right is the normal-form representation of this game.

General formulation
In order for a game to be in normal form, we are provided with the following data:
• There is a finite set P of players, which we label {1, 2, ..., m}
• Each player k in P has a finite number of pure strategies, a set $S_k$

A pure strategy profile is an association of strategies to players, that is, an m-tuple

$\vec{s} = (s_1, s_2, \ldots, s_m)$

such that

$s_1 \in S_1, s_2 \in S_2, \ldots, s_m \in S_m.$

A payoff function is a function

$F : S_1 \times S_2 \times \cdots \times S_m \to \mathbb{R}$

whose intended interpretation is the award given to a single player at the outcome of the game. Accordingly, to
completely specify a game, the payoff function has to be specified for each player in the player set P = {1, 2, ..., m}.

Definition: A game in normal form is a structure

$\mathcal{G} = \langle P, \mathbf{S}, \mathbf{F} \rangle$

where $P = \{1, 2, \ldots, m\}$ is a set of players, $\mathbf{S} = (S_1, S_2, \ldots, S_m)$ is an m-tuple of pure strategy sets, one for each player, and $\mathbf{F} = (F_1, F_2, \ldots, F_m)$ is an m-tuple of payoff functions.
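As a minimal sketch, the structure (P, S, F) can be written down directly; here the 2×2 example game from earlier in this article is encoded (identifiers are illustrative only):

```python
from itertools import product

P = [1, 2]                                        # players
S = {1: ["top", "bottom"], 2: ["left", "right"]}  # pure strategy sets
payoffs = {("top", "left"): (4, 3),    ("top", "right"): (-1, -1),
           ("bottom", "left"): (0, 0), ("bottom", "right"): (3, 4)}

def F(k, profile):
    """Payoff function of player k at a pure strategy profile."""
    return payoffs[profile][k - 1]

for profile in product(S[1], S[2]):
    print(profile, [F(k, profile) for k in P])
```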



References
• D. Fudenberg and J. Tirole, Game Theory, MIT Press, 1991.
• Leyton-Brown, Kevin; Shoham, Yoav (2008), Essentials of Game Theory: A Concise, Multidisciplinary
Introduction [1], San Rafael, CA: Morgan & Claypool Publishers, ISBN 978-1-598-29593-1. An 88-page
mathematical introduction; free online [2] at many universities.
• R. D. Luce and H. Raiffa, Games and Decisions, Dover Publications, 1989.
• Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical
Foundations [3], New York: Cambridge University Press, ISBN 978-0-521-89943-7. A comprehensive reference
from a computational perspective; see Chapter 3. Downloadable free online [4].
• J. Weibull, Evolutionary Game Theory, MIT Press, 1996
• J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, John Wiley Science Editions,
1964. This book was initially published by Princeton University Press in 1944.

External links
• http://www.whalens.org/Sofia/choice/matrix.htm

References
[1] http://www.gtessentials.org
[2] http://www.morganclaypool.com/doi/abs/10.2200/S00108ED1V01Y200802AIM003
[3] http://www.masfoundations.org
[4] http://www.masfoundations.org/download.html

Extensive-form game
An extensive-form game is a specification of a game in game theory, allowing (as the name suggests) explicit
representation of a number of important aspects, like the sequencing of players' possible moves, their choices at
every decision point, the (possibly imperfect) information each player has about the other player's moves when he
makes a decision, and his payoffs for all possible game outcomes. Extensive-form games also allow representation
of incomplete information in the form of chance events encoded as "moves by nature".

Finite extensive-form games


Some authors, particularly in introductory textbooks, initially define the extensive-form game as being just a game
tree with payoffs (no imperfect or incomplete information), and add the other elements in subsequent chapters as
refinements. Whereas the rest of this article follows this gentle approach with motivating examples, we present
upfront the finite extensive-form games as (ultimately) constructed here. This general definition was introduced by
Harold W. Kuhn in 1953, who extended an earlier definition of von Neumann from 1928. Following the presentation
from Hart (1992), an n-player extensive-form game thus consists of the following:
• A finite set of n (rational) players
• A rooted tree, called the game tree
• Each terminal (leaf) node of the game tree has an n-tuple of payoffs, meaning there is one payoff for each player
at the end of every possible play
• A partition of the non-terminal nodes of the game tree in n+1 subsets, one for each (rational) player, and with a
special subset for a fictitious player called Chance (or Nature). Each player's subset of nodes is referred to as the
"nodes of the player". (A game of complete information thus has an empty set of Chance nodes.)

• Each node of the Chance player has a probability distribution over its outgoing edges.
• Each set of nodes of a rational player is further partitioned in information sets, which make certain choices
indistinguishable for the player when making a move, in the sense that:
• there is a one-to-one correspondence between outgoing edges of any two nodes of the same information
set—thus the set of all outgoing edges of an information set is partitioned in equivalence classes, each class
representing a possible choice for a player's move at some point—, and
• every (directed) path in the tree from the root to a terminal node can cross each information set at most once
• the complete description of the game specified by the above parameters is common knowledge among the players
A play is thus a path through the tree from the root to a terminal node. At any given non-terminal node belonging to
Chance, an outgoing branch is chosen according to the probability distribution. At any rational player's node, the
player must choose one of the equivalence classes for the edges, which determines precisely one outgoing edge
except (in general) the player doesn't know which one is being followed. (An outside observer knowing every other
player's choices up to that point, and the realization of Nature's moves, can determine the edge precisely.) A pure
strategy for a player thus consists of a selection—choosing precisely one class of outgoing edges for every
information set (of his). In a game of perfect information, the information sets are singletons. It's less evident how
payoffs should be interpreted in games with Chance nodes. It is assumed that each player has a von
Neumann–Morgenstern utility function defined for every game outcome; this assumption entails that every rational
player will evaluate an a priori random outcome by its expected utility.
The above presentation, while precisely defining the mathematical structure over which the game is played, elides
however the more technical discussion of formalizing statements about how the game is played like "a player cannot
distinguish between nodes in the same information set when making a decision". These can be made precise using
epistemic modal logic; see Shoham & Leyton-Brown (2009, chpt. 13) for details.
A perfect information two-player game over a game tree (as defined in combinatorial game theory and artificial
intelligence), for instance chess, can be represented as an extensive form game as defined with the same game tree
and the obvious payoffs for win/lose/draw outcomes. A game over an expectiminimax tree, like that of backgammon,
has no imperfect information (all information sets are singletons) but has Chance moves. As further examples,
various variants of poker have both chance moves (the cards being dealt, initially and possibly subsequently
depending on the poker variant, e.g. in draw poker there are additional Chance nodes besides the initial one), and
also have imperfect information (some or all the cards held by other players, again depending on the Poker variant;
see community card poker). (Binmore 2007, chpt. 2)

Perfect and complete information


A complete extensive-form representation specifies:
1. the players of a game
2. for every player every opportunity they have to move
3. what each player can do at each of their moves
4. what each player knows for every move
5. the payoffs received by every player for every possible combination of moves

The game on the right has two players: 1 and 2.


The numbers by every non-terminal node indicate
to which player that decision node belongs. The
numbers by every terminal node represent the
payoffs to the players (e.g. 2,1 represents a payoff
of 2 to player 1 and a payoff of 1 to player 2). The
labels by every edge of the graph are the name of
the action that that edge represents.
The initial node belongs to player 1, indicating that
player 1 moves first. Play according to the tree is
as follows: player 1 chooses between U and D;
player 2 observes player 1's choice and then
chooses between U' and D' . The payoffs are as
specified in the tree. There are four outcomes
represented by the four terminal nodes of the tree:
(U,U'), (U,D'), (D,U') and (D,D'). The payoffs associated with each outcome respectively are as follows: (0,0), (2,1), (1,2) and (3,1).

A game represented in extensive form

If player 1 plays D, player 2 will play U' to maximise his payoff and so player 1 will only receive 1. However, if
player 1 plays U, player 2 maximises his payoff by playing D' and player 1 receives 2. Player 1 prefers 2 to 1 and so
will play U and player 2 will play D' . This is the subgame perfect equilibrium.
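Backward induction can be carried out mechanically on this small tree. The sketch below encodes the tree above (node and action labels are ours) and returns the subgame perfect outcome:

```python
# A node is either a payoff tuple (leaf) or a pair (player, {action: subtree}).
game = (1, {"U": (2, {"U'": (0, 0), "D'": (2, 1)}),
            "D": (2, {"U'": (1, 2), "D'": (3, 1)})})

def backward_induction(node):
    if not isinstance(node[1], dict):
        return node, []                      # leaf: payoffs, no further moves
    player, children = node
    best = None
    for action, child in children.items():
        payoffs, path = backward_induction(child)
        # The mover keeps the action maximising his own downstream payoff.
        if best is None or payoffs[player - 1] > best[0][player - 1]:
            best = (payoffs, [action] + path)
    return best

print(backward_induction(game))  # ((2, 1), ['U', "D'"])
```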

Imperfect information
An advantage of representing the game in this way is that it is clear what the order of play is. The tree shows clearly
that player 1 moves first and player 2 observes this move. However, in some games play does not occur like this.
One player does not always observe the choice of another (for example, moves may be simultaneous or a move may
be hidden). An information set is a set of decision nodes such that:
1. Every node in the set belongs to one player.
2. When play reaches the information set, the player with the move cannot differentiate between nodes within the
information set; i.e. if the information set contains more than one node, the player to whom that set belongs does
not know which node in the set has been reached.
In extensive form, an information set is indicated by a dotted line connecting all nodes in that set or sometimes by a
loop drawn around all the nodes in that set.

If a game has an information set with more


than one member that game is said to have
imperfect information. A game with
perfect information is such that at any
stage of the game, every player knows
exactly what has taken place earlier in the
game; i.e. every information set is a
singleton set. Any game without perfect
information has imperfect information.

The game on the left is the same as the


above game except that player 2 does not
know what player 1 does when he comes to
play. The first game described has perfect
information; the game on the left does not.
A game with imperfect information represented in extensive form

If both players are rational and both know that both players are rational and everything that is known by any player is known to be known by every player (i.e. player 1 knows player 2 knows that player 1 is rational and player 2 knows this, etc. ad infinitum), play in the first game will be as follows: player 1 knows that if he plays U, player 2 will play D' (because for player 2 a payoff of 1 is preferable to a payoff of 0) and so player 1 will receive 2. However, if player 1 plays D, player 2 will play U' (because to player 2 a payoff of 2 is better than a payoff of 1) and player 1 will receive 1. Hence, in the first game, the equilibrium will be (U, D') because player 1 prefers to receive 2 to 1 and so will play U and so player 2 will play D'.

In the second game it is less clear: player 2 cannot observe player 1's move. Player 1 would like to fool player 2 into
thinking he has played U when he has actually played D so that player 2 will play D' and player 1 will receive 3. In
fact in the second game there is a perfect Bayesian equilibrium where player 1 plays D and player 2 plays U' and
player 2 holds the belief that player 1 will definitely play D. In this equilibrium, every strategy is rational given the
beliefs held and every belief is consistent with the strategies played. Notice how the imperfection of information
changes the outcome of the game.
In games with infinite action spaces and imperfect information, non-singleton information sets are represented, if
necessary, by inserting a dotted line connecting the (non-nodal) endpoints behind the arc described above or by
dashing the arc itself. In the Stackelberg game described below, if the second player had not observed the first
player's move the game would no longer fit the Stackelberg model; it would be Cournot competition.

Incomplete information
It may be the case that a player does not know exactly what the payoffs of the game are or of what type his
opponents are. This sort of game has incomplete information. In extensive form it is represented as a game with
complete but imperfect information using the so-called Harsanyi transformation. This transformation introduces to
the game the notion of nature's choice or God's choice. Consider a game consisting of an employer considering
whether to hire a job applicant. The job applicant's ability might be one of two things: high or low. His ability level
is random; he is low ability with probability 1/3 and high ability with probability 2/3. In this case, it is convenient to
model nature as another player of sorts who chooses the applicant's ability according to those probabilities. Nature
however does not have any payoffs. Nature's choice is represented in the game tree by a non-filled node. Edges
coming from a nature's choice node are labelled with the probability of the event it represents occurring.

The game on the left is one of complete


information (all the players and payoffs are
known to everyone) but of imperfect
information (the employer doesn't know
what was nature's move.) The initial node is
in the centre and it is not filled, so nature
moves first. Nature selects with the same
probability the type of player 1 (which in
this game is tantamount to selecting the
payoffs in the subgame played), either t1 or
t2. Player 1 has distinct information sets for
these; i.e. player 1 knows what type he is
(this need not be the case). However, player 2 does not observe nature's choice. He does not know the type of player 1; however, in this game he does observe player 1's actions; i.e. there is perfect information. Indeed, it is now appropriate to alter the above definition of perfect information: at every stage in the game, every player knows what has been played by the other players. In the case of complete information, every player knows what has been played by nature. Information sets are represented as before by broken lines.

A game with complete but imperfect information represented in extensive form

In this game, if nature selects t1 as player 1's type, the game played will be like the very first game described, except
that player 2 does not know it (and the very fact that this cuts through his information sets disqualifies it from
subgame status). There is one separating perfect Bayesian equilibrium; i.e. an equilibrium in which different types
do different things.
If both types play the same action (pooling), an equilibrium cannot be sustained. If both play D, player 2 can only
form the belief that he is on either node in the information set with probability 1/2 (because this is the chance of
seeing either type). Player 2 maximises his payoff by playing D' . However, if he plays D' , type 2 would prefer to
play U. This cannot be an equilibrium. If both types play U, player 2 again forms the belief that he is at either node
with probability 1/2. In this case player 2 plays D' , but then type 1 prefers to play D.
If type 1 plays U and type 2 plays D, player 2 will play D' whatever action he observes, but then type 1 prefers D.
The only equilibrium hence is with type 1 playing D, type 2 playing U and player 2 playing U' if he observes D and
randomising if he observes U. Through his actions, player 1 has signalled his type to player 2.

Formal definition
Formally, a finite game in extensive form is a structure $\Gamma = \langle K, \mathbf{H}, \mathbf{A}, a, (I, \iota), \rho, u \rangle$ where:
• $K = (V, v_0, T, p)$ is a finite tree with a set of nodes $V$, a unique initial node $v_0 \in V$, a set of terminal nodes $T \subset V$ (let $D = V \setminus T$ be the set of decision nodes) and an immediate predecessor function $p : V \setminus \{v_0\} \to D$ on which the rules of the game are represented,
• $\mathbf{H}$ is a partition of $D$ called an information partition,
• $\mathbf{A}(H)$ is a set of actions available for each information set $H \in \mathbf{H}$, which forms a partition on the set of all actions $\mathbf{A}$,
• $a$ is an action partition corresponding each edge (each non-initial node $v$) to a single action $a(v)$, fulfilling: for every $v \in D$, the restriction $a_v$ of $a$ on the successors $s(v) = p^{-1}(v)$ is a bijection onto $\mathbf{A}(H_v)$, where $H_v$ is the information set containing $v$,
• $I = \{0, 1, \ldots, n\}$ is a finite set of players, $0$ is (a special player called) nature, and $\iota : \mathbf{H} \to I$ is a player partition of the information sets; let $\iota(v)$ denote the single player that makes a move at node $v$,
• $\rho$ is a family of probabilities of the actions of nature, and
• $u : T \to \mathbb{R}^n$ is a payoff profile function.

Infinite action space


It may be that a player has an infinite number of possible actions to choose from at a particular decision node. The
device used to represent this is an arc joining two edges protruding from the decision node in question. If the action
space is a continuum between two numbers, the lower and upper delimiting numbers are placed at the bottom and
top of the arc respectively, usually with a variable that is used to express the payoffs. The infinite number of decision
nodes that could result are represented by a single node placed in the centre of the arc. A similar device is used to
represent action spaces that, whilst not infinite, are large enough to prove impractical to represent with an edge for
each action.

A game with infinite action spaces represented in extensive form

The tree on the left represents such a game, either with infinite action spaces (any real number between 0 and 5000)
or with very large action spaces (perhaps any integer between 0 and 5000). This would be specified elsewhere. Here,
it will be supposed that it is the latter and, for concreteness, it will be supposed it represents two firms engaged in
Stackelberg competition. The payoffs to the firms are represented on the left, with q1 and q2 as the strategy they
adopt and c1 and c2 as some constants (here marginal costs to each firm). The subgame perfect Nash equilibria of
this game can be found by taking the first partial derivative of each payoff function with respect to the
follower's (firm 2) strategy variable ($q_2$) and finding its best response function, $q_2(q_1) = \frac{5000 - q_1 - c_2}{2}$.
The same process can be done for the leader except that in calculating its profit, it knows that firm 2 will play the
above response and so this can be substituted into its maximisation problem. It can then solve for $q_1$ by taking the
first derivative, yielding $q_1^* = \frac{5000 + c_2 - 2c_1}{2}$. Feeding this into firm 2's best response function yields
$q_2^* = \frac{5000 + 2c_1 - 3c_2}{4}$, and $(q_1^*, q_2^*)$ is the subgame perfect Nash equilibrium.
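The same computation can be reproduced symbolically. The sketch below assumes payoff functions of the form π_i = q_i(5000 - q1 - q2 - c_i); this functional form is an assumption on our part (the figure with the exact payoffs is not reproduced here), chosen to be consistent with the best-response formulas above:

```python
from sympy import symbols, diff, solve

q1, q2, c1, c2 = symbols("q1 q2 c1 c2")

# Assumed payoffs: pi_i = q_i * (5000 - q1 - q2 - c_i).
pi2 = q2 * (5000 - q1 - q2 - c2)
br2 = solve(diff(pi2, q2), q2)[0]    # firm 2's best response to q1
print(br2)                           # equals (5000 - q1 - c2)/2

pi1 = q1 * (5000 - q1 - br2 - c1)    # the leader anticipates firm 2's response
q1_star = solve(diff(pi1, q1), q1)[0]
q2_star = br2.subs(q1, q1_star).simplify()
print(q1_star, q2_star)  # (5000 + c2 - 2*c1)/2 and (5000 + 2*c1 - 3*c2)/4
```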

References
• Hart, Sergiu (1992). "Games in extensive and strategic forms". In Aumann, Robert; Hart, Sergiu. Handbook of
Game Theory with Economic Applications. 1. Elsevier. ISBN 9780444880987.
• Binmore, Kenneth (2007). Playing for real: a text on game theory. Oxford University Press US.
ISBN 9780195300574.
• Dresher M. (1961). The mathematics of games of strategy: theory and applications (Ch4: Games in extensive
form, pp74–78). Rand Corp. ISBN 0-486-64216-X
• Fudenberg D and Tirole J. (1991) Game theory (Ch3 Extensive form games, pp67–106). MIT Press. ISBN
0-262-06141-4
• Leyton-Brown, Kevin; Shoham, Yoav (2008), Essentials of Game Theory: A Concise, Multidisciplinary
Introduction [1], San Rafael, CA: Morgan & Claypool Publishers, ISBN 978-1-598-29593-1. An 88-page
mathematical introduction; see Chapters 4 and 5. Free online [2] at many universities.

• Luce R.D. and Raiffa H. (1957). Games and decisions: introduction and critical survey. (Ch3: Extensive and
Normal Forms, pp39–55). Wiley New York. ISBN 0-486-65943-7
• Osborne MJ and Rubinstein A. 1994. A course in game theory (Ch6 Extensive game with perfect information,
pp. 89–115). MIT press. ISBN 0-262-65040-1
• Shoham, Yoav; Leyton-Brown, Kevin (2009), Multiagent Systems: Algorithmic, Game-Theoretic, and Logical
Foundations [3], New York: Cambridge University Press, ISBN 978-0-521-89943-7. A comprehensive reference
from a computational perspective; see Chapter 5. Downloadable free online [4].

Further reading
• Horst Herrlich (2006). Axiom of choice. Springer. ISBN 9783540309895., 6.1, "Disasters in Game Theory" and
7.2 "Measurability (The Axiom of Determinateness)", discusses problems in extending the finite-case definition
to infinite number of options (or moves)
Historical papers
• Neumann, J. (1928). "Zur Theorie der Gesellschaftsspiele". Mathematische Annalen 100: 295–320.
doi:10.1007/BF01448847.
• Harold William Kuhn (2003). Lectures on the theory of games. Princeton University Press.
ISBN 9780691027722. contains Kuhn's lectures at Princeton from 1952 (officially unpublished previously, but in
circulation as photocopies)

Succinct game
Consider a game of three players, I, II and III, facing, respectively, the strategies {T,B}, {L,R}, and {l,r}. Without further constraints, 3·2³ = 24 utility values would be required to describe such a game.

L, l L, r R, l R, r

T 4, 6, 2 5, 5, 5 8, 1, 7 1, 4, 9

B 8, 6, 6 7, 4, 7 9, 6, 5 0, 3, 0

For each strategy profile, the utility of the first player is listed first (red), and is followed by the utilities of the second player (green) and the third player (blue).

In algorithmic game theory, a succinct game or a succinctly representable game is a game which may be
represented in a size much smaller than its normal form representation. Without placing constraints on player
utilities, describing a game of $n$ players, each facing $s$ strategies, requires listing $n s^n$ utility values. Even trivial
algorithms are capable of finding a Nash equilibrium in a time polynomial in the length of such a large input. A
succinct game is of polynomial type if in a game represented by a string of length n the number of players, as well as
the number of strategies of each player, is bounded by a polynomial in n[1] (a formal definition, describing succinct
games as a computational problem, is given by Papadimitriou & Roughgarden 2008[2]).

Types of succinct games

Graphical games

Say that each player's utility depends only on his own action and the action of one other player - for instance, I depends on II, II on III and III on I. Representing such a
game would require only three 2x2 utility tables, containing in all only 12 utility values.

L R

T 9 8

B 3 4

l r

L 6 8

R 1 3

T B

l 4 4

r 5 7

Graphical games are games in which the utilities of each player depend on the actions of very few other players. If
$d$ is the greatest number of players by whose actions any single player is affected (that is, it is the indegree of the
game graph), the number of utility values needed to describe the game is $n s^{d+1}$, which, for a small $d$, is a
considerable improvement.
It has been shown that any normal form game is reducible to a graphical game with all degrees bounded by three and
with two strategies for each player.[3] Unlike normal form games, the problem of finding a pure Nash equilibrium in
graphical games (if one exists) is NP-complete.[4] The problem of finding a (possibly mixed) Nash equilibrium in a
graphical game is PPAD-complete.[5] Finding a correlated equilibrium of a graphical game can be done in
polynomial time, and for a graph with a bounded tree-width, this is also true for finding an optimal correlated
equilibrium.[2]

Sparse games

When most of the utilities are 0, as below, it is easy to come up with a succinct representation.

L, l L, r R, l R, r

T 0, 0, 0 2, 0, 1 0, 0, 0 0, 7, 0

B 0, 0, 0 0, 0, 0 2, 0, 3 0, 0, 0

Sparse games are those where most of the utilities are zero. Graphical games may be seen as a special case of sparse
games.
For a two player game, a sparse game may be defined as a game in which each row and column of the two payoff
(utility) matrices has at most a constant number of non-zero entries. It has been shown that finding a Nash
equilibrium in such a sparse game is PPAD-hard, and that there does not exist a fully polynomial-time
approximation scheme unless PPAD is in P.[6]

Symmetric games

Suppose all three players are


identical (we'll color them all
purple), and face the strategy set
{T,B}. Let #TP and #BP be the
number of a player's peers
who've chosen T and B,
respectively. Describing this
game requires only 6 utility
values.

#TP=2  #TP=1  #TP=0
#BP=0  #BP=1  #BP=2

T 5 2 2

B 1 7 2

In symmetric games all players are identical, so in evaluating the utility of a combination of strategies, all that
matters is how many of the players play each of the $s$ strategies. Thus, describing such a game requires giving
only $s \binom{n+s-2}{s-1}$ utility values.
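A quick numerical check of this count, under the formula as stated above:

```python
from math import comb

def symmetric_game_size(n, s):
    """One utility value per own strategy and per distribution of the
    n-1 peers over the s strategies."""
    return s * comb(n + s - 2, s - 1)

print(symmetric_game_size(3, 2))  # 6, matching the example above
print(3 * 2 ** 3)                 # 24 values in the full normal form
```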

In a symmetric game with 2 strategies there always exists a pure Nash equilibrium.[7] The problem of finding a pure
Nash equilibrium in a symmetric game (with possibly more than two players) with a constant number of actions is in
AC0; however, when the number of actions grows with the number of players (even linearly) the problem is
NP-complete.[8] In any symmetric game there exists a symmetric equilibrium. Given a symmetric game of n players
facing k strategies, a symmetric equilibrium may be found in polynomial time if $k = O(\log n / \log \log n)$.[9]
Finding a correlated equilibrium in symmetric games may be done in polynomial time.[2]

Anonymous games

If players were different but did not distinguish between other players, we would need to list 18 utility values to represent the game - that is, one table such as that given for "symmetric games" above for each player.

#TP=2  #TP=1  #TP=0
#BP=0  #BP=1  #BP=2

T 8, 8, 2 2, 9, 5 4, 1, 4

B 6, 1, 3 2, 2, 1 7, 0, 6

In anonymous games, players have different utilities but do not distinguish between other players (for instance,
having to choose between "go to cinema" and "go to bar" while caring only about how crowded each place will be, not
whom they will meet there). In such a game a player's utility again depends on how many of his peers choose which
strategy, and on his own choice, so $n s \binom{n+s-2}{s-1}$ utility values are required.

If the number of actions grows with the number of players, finding a pure Nash equilibrium in an anonymous game
is NP-hard.[8] An optimal correlated equilibrium of an anonymous game may be found in polynomial time.[2] When
the number of strategies is 2, there is a known PTAS for finding an ε-approximate Nash equilibrium.[10]

Polymatrix games

If the game in question was a polymatrix game, describing it would require 24 utility values. For simplicity's sake, let us examine only the utilities of player I (we would need two more such tables for each of the other players).

I vs II:
        L       R
T       4, 6    8, 7
B       3, 7    9, 1

I vs III:
        l       r
T       7, 7    1, 6
B       8, 6    6, 4

II vs III:
        l       r
L       2, 9    3, 3
R       2, 4    1, 5

If strategy profile (B,R,l) was chosen, player I's utility would be 9+8=17, player II's utility would be 1+2=3, and player III's utility would be 6+4=10.

In a polymatrix game (also known as a multimatrix game), there is a utility matrix for every pair of players (i, j),
denoting a component of player i's utility. Player i's final utility is the sum of all such components. The number of
utility values required to represent such a game is $n(n-1)s^2$.
Polymatrix games always have at least one mixed Nash equilibrium.[11] The problem of finding a Nash equilibrium
in a polymatrix game is PPAD-complete.[5] Finding a correlated equilibrium of a polymatrix game can be done in
polynomial time.[2]
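A minimal sketch of how a polymatrix utility is evaluated, hard-coding player I's two component matrices from the tables above (helper names are illustrative):

```python
# Index 0 = T/L/l, 1 = B/R/r. u_I_vs_II[a_I][a_II] and u_I_vs_III[a_I][a_III]
# hold player I's components against players II and III respectively.
u_I_vs_II = [[4, 8], [3, 9]]
u_I_vs_III = [[7, 1], [8, 6]]

def utility_I(a_I, a_II, a_III):
    """Player I's utility is the sum of his pairwise components."""
    return u_I_vs_II[a_I][a_II] + u_I_vs_III[a_I][a_III]

print(utility_I(1, 1, 0))  # profile (B, R, l): 9 + 8 = 17, as above
```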

Circuit games

Let us now equate the players' various strategies with the Boolean values "0" and "1", and let X stand for player I's choice, Y for player II's choice and Z for player III's
choice. Let us assign each player a circuit:

Player I: X ∧ (Y ∨ Z)
Player II: X ⨁ Y ⨁ Z
Player III: X ∨ Y

These describe the utility table below.

0, 0 0, 1 1, 0 1, 1

0 0, 0, 0 0, 1, 0 0, 1, 1 0, 0, 1

1 0, 1, 1 1, 0, 1 1, 0, 1 1, 1, 1

The most flexible way of representing a succinct game is by representing each player by a polynomial-time
bounded Turing machine, which takes as its input the actions of all players and outputs the player's utility. Such a
Turing machine is equivalent to a Boolean circuit, and it is this representation, known as circuit games, that we will
consider.
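The utility table above can be reproduced by evaluating the three circuits on every pure strategy profile, as in this sketch:

```python
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            u = (x & (y | z),  # player I:   X AND (Y OR Z)
                 x ^ y ^ z,    # player II:  X XOR Y XOR Z
                 x | y)        # player III: X OR Y
            print((x, y, z), u)
```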
Computing the value of a 2-player zero-sum circuit game is an EXP-complete problem,[12] and approximating the
value of such a game up to a multiplicative factor is known to be in PSPACE.[13] Determining whether a pure Nash
equilibrium exists is an NP-complete problem.[14]

Other representations
Many other types of succinct game exist (many having to do with allocation of resources). Examples include
congestion games, network congestion games, scheduling games, local effect games, facility location games,
action-graph games, hypergraphical games and more.

Summary of complexities of finding equilibria


Below is a table of some known complexity results for finding certain classes of equilibria in several game
representations. "NE" stands for "Nash equilibrium", and "CE" for "correlated equilibrium". n is the number of
players and s is the number of strategies each player faces (we're assuming all players face the same number of
strategies). In graphical games, d is the maximum indegree of the game graph. For references, see main article text.

Representation Size (O(...)) Pure NE Mixed NE CE Optimal CE

Normal form game Linear PPAD-complete P P

Graphical game NP-complete PPAD-complete P NP-hard

Symmetric game NP-complete PPAD-complete P P

Anonymous game NP-hard P P

Polymatrix game PPAD-complete P NP-hard

Circuit game NP-complete

Congestion game PLS-complete P NP-hard

Notes
[1] Papadimitriou, Christos H. (2007). "The Complexity of Finding Nash Equilibria". In Nisan, Noam; Roughgarden, Tim; Tardos, Éva et al.
Algorithmic Game Theory. Cambridge University Press. pp. 29–52. ISBN 978-0-521-87282-9.
[2] Papadimitriou, Christos H.; Roughgarden, Tim (2008). "Computing Correlated Equilibria in Multi-Player Games"
(http://portal.acm.org/citation.cfm?id=1379759.1379762). J. ACM 55 (3): 1–29. doi:10.1145/1379759.1379762. Retrieved 2010-01-23.
[3] Goldberg, Paul W.; Papadimitriou, Christos H. (2006). "Reducibility Among Equilibrium Problems"
(http://portal.acm.org/citation.cfm?id=1132516.1132526). Proceedings of the thirty-eighth annual ACM symposium on Theory of computing.
Seattle, WA, USA: ACM. pp. 61–70. doi:10.1145/1132516.1132526. ISBN 1-59593-134-1. Retrieved 2010-01-25.
[4] Gottlob, G.; Greco, G.; Scarcello, F. (2005). "Pure Nash Equilibria: Hard and Easy Games". Journal of Artificial Intelligence Research 24: 195–220.
[5] Daskalakis, Constantinos; Fabrikant, Alex; Papadimitriou, Christos H. (2006). "The Game World Is Flat: The Complexity of Nash Equilibria
in Succinct Games". Automata, Languages and Programming. pp. 513–524. doi:10.1007/11786986_45.
[6] Chen, Xi; Deng, Xiaotie; Teng, Shang-Hua (2006). "Sparse Games Are Hard"
(http://www.springerlink.com/content/v2603131200h23hq/). Internet and Network Economics. pp. 262–273. doi:10.1007/11944874_24.
ISBN 978-3-540-68138-0. Retrieved 2010-01-24.
[7] Cheng, Shih-Fen; Reeves, Daniel M.; Vorobeychik, Yevgeniy; Wellman, Michael P. (2004). "Notes on Equilibria in Symmetric Games".
AAMAS-04 Workshop on Game Theory and Decision Theory.
[8] Brandt, Felix; Fischer, Felix; Holzer, Markus (2009). "Symmetries and the Complexity of Pure Nash Equilibrium"
(http://portal.acm.org/citation.cfm?id=1501295). J. Comput. Syst. Sci. 75 (3): 163–177. Retrieved 2010-01-31.
[9] Papadimitriou, Christos H.; Roughgarden, Tim (2005). "Computing equilibria in multi-player games"
(http://portal.acm.org/citation.cfm?id=1070432.1070444). Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms.
Vancouver, British Columbia: Society for Industrial and Applied Mathematics. pp. 82–91. ISBN 0-89871-585-7. Retrieved 2010-01-25.
[10] Daskalakis, Constantinos; Papadimitriou, Christos H. (2007). "Computing Equilibria in Anonymous Games". arXiv:0710.5582v1 [cs].
[11] Howson, Joseph T. (January 1972). "Equilibria of Polymatrix Games". Management Science 18 (5): 312–318. ISSN 0025-1909.
JSTOR 2634798.
[12] Feigenbaum, Joan; Koller, Daphne; Shor, Peter (1995). "A Game-Theoretic Classification of Interactive Complexity Classes"
(http://portal.acm.org/citation.cfm?id=868345). Center for Discrete Mathematics & Theoretical Computer Science. Retrieved 2010-01-25.
[13] Fortnow, Lance; Impagliazzo, Russell; Kabanets, Valentine; Umans, Christopher (2005). "On the Complexity of Succinct Zero-Sum
Games" (http://portal.acm.org/citation.cfm?id=1068661). Proceedings of the 20th Annual IEEE Conference on Computational Complexity.
IEEE Computer Society. pp. 323–332. ISBN 0-7695-2364-1. Retrieved 2010-01-23.
[14] Schoenebeck, Grant; Vadhan, Salil (2006). "The computational complexity of nash equilibria in concisely represented games"
(http://portal.acm.org/citation.cfm?id=1134707.1134737). Proceedings of the 7th ACM conference on Electronic commerce. Ann Arbor,
Michigan, USA: ACM. pp. 270–279. doi:10.1145/1134707.1134737. ISBN 1-59593-236-4. Retrieved 2010-01-25.

External links
• Algorithmic Game Theory: The Computational Complexity of Pure Nash (http://agtb.wordpress.com/2009/11/19/the-computational-complexity-of-pure-nash/)

Equilibrium Concepts

Trembling hand perfect equilibrium


(Normal form) trembling hand
perfect equilibrium
A solution concept in game theory

Relationships

Subset of Nash Equilibrium

Superset of Proper equilibrium

Significance

Proposed by Reinhard Selten

Trembling hand perfect equilibrium is a refinement of Nash Equilibrium due to Reinhard Selten. A trembling
hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by
assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with
negligible probability.

Definition
First we define a perturbed game. A perturbed game is a copy of a base game, with the restriction that only totally
mixed strategies are allowed to be played. A totally mixed strategy is a mixed strategy where every pure strategy is
played with non-zero probability. This is the "trembling hands" of the players; they sometimes play a different
strategy than the one they intended to play. Then we define a strategy set S (in a base game) as being trembling hand
perfect if there is a sequence of perturbed games that converge to the base game in which there is a series of Nash
equilibria that converge to S.

Example
The game represented in the following normal form matrix has two pure strategy Nash equilibria, namely <Up, Left>
and <Down, Right>. However, only <U,L> is trembling-hand perfect.

        Left   Right
Up      1, 1   2, 0
Down    0, 2   2, 2
Trembling hand perfect equilibrium

Assume player 1 is playing a mixed strategy (1 − ε, ε), for 0 < ε < 1. Player 2's expected payoff from playing L is:

1(1 − ε) + 2ε = 1 + ε

Player 2's expected payoff from playing the strategy R is:

0(1 − ε) + 2ε = 2ε

For small values of ε, player 2 maximizes his expected payoff by placing a minimal weight on R and maximal weight on L. By symmetry, player 1 should place a minimal weight on D if player 2 is playing the mixed strategy (1 − ε, ε). Hence <U,L> is trembling-hand perfect.
However, similar analysis fails for the strategy profile <D,R>.
Assume player 1 is playing a mixed strategy (ε, 1 − ε). Player 2's expected payoff from playing L is:

1ε + 2(1 − ε) = 2 − ε

Player 2's expected payoff from playing R is:

0ε + 2(1 − ε) = 2 − 2ε

For all positive values of ε, player 2 maximizes his expected payoff by placing a minimal weight on R and maximal weight on L. Hence <D, R> is not trembling-hand perfect because player 2 (and, by symmetry, player 1) maximizes his expected payoff by deviating most often to L if there is a small chance of error in the behavior of player 1.
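
To make the computation concrete, here is a minimal Python check (an illustration added to this text, not part of the original article) of player 2's expected payoffs against a trembling player 1:

# Player 2's payoffs in the example game: rows are player 1's moves (Up, Down),
# columns are player 2's replies (Left, Right).
P2 = [[1, 0],
      [2, 2]]

def expected_payoffs(prob_up):
    # Expected payoff to player 2 from playing L and from playing R
    # when player 1 plays Up with probability prob_up and Down otherwise.
    prob_down = 1 - prob_up
    left = prob_up * P2[0][0] + prob_down * P2[1][0]
    right = prob_up * P2[0][1] + prob_down * P2[1][1]
    return left, right

eps = 0.01
print(expected_payoffs(1 - eps))  # tremble around <U,L>: L earns 1 + eps, R only 2*eps
print(expected_payoffs(eps))      # tremble around <D,R>: L earns 2 - eps, R only 2 - 2*eps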

Trembling hand perfect equilibria of two-player games


For two-player games, the set of trembling hand perfect equilibria coincides with the set of admissible equilibria, i.e.,
equilibria consisting of two undominated strategies. In the example above, we see that the imperfect equilibrium
<D,R> is not admissible, as L (weakly) dominates R for Player 2.

Trembling hand perfect equilibria of extensive form games

Extensive-form trembling hand perfect equilibrium


A solution concept in game theory

Relationships

Subset of Subgame perfect equilibrium, Perfect Bayesian equilibrium, Sequential equilibrium

Significance

Proposed by Reinhard Selten

Used for Extensive form games

There are two possible ways of extending the definition of trembling hand perfection to extensive form games.
• One may interpret the extensive form as being merely a concise description of a normal form game and apply the
concepts described above to this normal form game. In the resulting perturbed games, every strategy of the
extensive-form game must be played with non-zero probability. This leads to the notion of a normal-form
trembling hand perfect equilibrium.
• Alternatively, one may recall that trembles are to be interpreted as modelling mistakes made by the players with
some negligible probability when the game is played. Such a mistake would most likely consist of a player
making another move than the one intended at some point during play. It would hardly consist of the player
choosing another strategy than intended, i.e. a wrong plan for playing the entire game. To capture this, one may
define the perturbed game by requiring that every move at every information set is taken with non-zero
probability. Limits of equilibria of such perturbed games as the tremble probabilities go to zero are called extensive-form trembling hand perfect equilibria.
The notions of normal-form and extensive-form trembling hand perfect equilibria are incomparable, i.e., an
equilibrium of an extensive-form game may be normal-form trembling hand perfect but not extensive-form
trembling hand perfect and vice versa. As an extreme example of this, Jean-François Mertens has given an example
of a two-player extensive form game where no extensive-form trembling hand perfect equilibrium is admissible, i.e.,
the sets of extensive-form and normal-form trembling hand perfect equilibria for this game are disjoint.

An extensive-form trembling hand perfect equilibrium is also a sequential equilibrium. A normal-form trembling
hand perfect equilibrium of an extensive form game may be sequential but is not necessarily so. In fact, a
normal-form trembling hand perfect equilibrium does not even have to be subgame perfect.

References
• Selten, R. (1975) A reexamination of the perfectness concept for equilibrium points in extensive games.
International Journal of Game Theory 4:25-55.

Proper equilibrium
Proper equilibrium
A solution concept in game theory

Relationships

Subset of Trembling hand perfect equilibrium

Significance

Proposed by Roger B. Myerson

Proper equilibrium is a refinement of Nash Equilibrium due to Roger B. Myerson. Proper equilibrium further
refines Reinhard Selten's notion of a trembling hand perfect equilibrium by assuming that more costly trembles are
made with significantly smaller probability than less costly ones.

Definition
Given a normal form game and a parameter ε > 0, a totally mixed strategy profile σ is defined to be ε-proper if, whenever a player has two pure strategies s and s' such that the expected payoff of playing s is smaller than the expected payoff of playing s' (that is, u(s, σ) < u(s', σ)), then the probability assigned to s is at most ε times the probability assigned to s'.
A strategy profile of the game is then said to be a proper equilibrium if it is a limit point, as ε approaches 0, of a sequence of ε-proper strategy profiles.

Example
The game to the right is a variant of Matching Pennies.

Matching Pennies with a twist
                Guess heads up   Guess tails up   Grab penny
Hide heads up   -1, 1            0, 0             -1, 1
Hide tails up   0, 0             -1, 1            -1, 1

Player 1 (row player) hides a penny and if Player 2 (column player) guesses correctly whether it is heads up or tails
up, he gets the penny. In this variant, Player 2 has a third option: Grabbing the penny without guessing. The Nash
equilibria of the game are the strategy profiles where Player 2 grabs the penny with probability 1. Any mixed
strategy of Player 1 is in (Nash) equilibrium with this pure strategy of Player 2. Any such pair is even trembling hand
perfect. Intuitively, since Player 1 expects Player 2 to grab the penny, he is not concerned about leaving Player 2
uncertain about whether it is heads up or tails up. However, it can be seen that the unique proper equilibrium of this
game is the one where Player 1 hides the penny heads up with probability 1/2 and tails up with probability 1/2 (and
Player 2 grabs the penny). This unique proper equilibrium can be motivated intuitively as follows: Player 1 fully
expects Player 2 to grab the penny. However, Player 1 still prepares for the unlikely event that Player 2 does not grab
the penny and instead for some reason decides to make a guess. Player 1 prepares for this event by making sure that
Player 2 has no information about whether the penny is heads up or tails up, exactly as in the original Matching
Pennies game.
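
The ε-proper condition itself is easy to verify mechanically. The sketch below (added here for illustration; the function and the payoff encoding are my own) checks a candidate profile of this game in which Player 2 puts weight 1 − ε on grabbing and splits the remaining ε between the two guesses:

import itertools

def is_epsilon_proper(payoffs, profile, eps):
    # payoffs[i][s][t]: payoff to player i when he plays his pure strategy s
    # and the opponent plays t; profile[i] is player i's totally mixed strategy.
    for i in (0, 1):
        own, opp = profile[i], profile[1 - i]
        u = [sum(payoffs[i][s][t] * opp[t] for t in range(len(opp)))
             for s in range(len(own))]
        # Whenever s earns strictly less than s2, s may carry at most
        # eps times the probability of s2.
        for s, s2 in itertools.permutations(range(len(own)), 2):
            if u[s] < u[s2] and own[s] > eps * own[s2]:
                return False
    return True

eps = 0.01
hider   = [[-1, 0, -1], [0, -1, -1]]   # rows: hide heads / hide tails
guesser = [[1, 0], [0, 1], [1, 1]]     # rows: guess heads / guess tails / grab
profile = ([0.5, 0.5], [eps / 2, eps / 2, 1 - eps])
print(is_epsilon_proper((hider, guesser), profile, eps))  # True

A full proof of properness would still require exhibiting such profiles for every sufficiently small ε and taking the limit.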

Proper equilibria of extensive games


One may apply the properness notion to extensive form games in two different ways, completely analogous to the
two different ways trembling hand perfection is applied to extensive games. This leads to the notions of normal
form proper equilibrium and extensive form proper equilibrium of an extensive form game. It was shown by van
Damme that a normal form proper equilibrium of an extensive form game is behaviorally equivalent to a
quasi-perfect equilibrium of that game.

References
• Roger B. Myerson. Refinements of the Nash equilibrium concept. International Journal of Game Theory,
15:133-154, 1978.
• Eric van Damme. "A relationship between perfect equilibria in extensive form games and proper equilibria in normal form games." International Journal of Game Theory 13:1–13, 1984.

Evolutionarily stable strategy


Evolutionarily stable strategy
A solution concept in game theory

Relationships

Subset of Nash equilibrium

Superset of Stochastically stable equilibrium, Stable Strong Nash equilibrium

Intersects with Subgame perfect equilibrium, Trembling hand perfect equilibrium, Perfect Bayesian equilibrium

Significance

Proposed by John Maynard Smith and George R. Price

Used for Biological modeling and Evolutionary game theory

Example Hawk-dove

In game theory, behavioural ecology, and evolutionary psychology an evolutionarily stable strategy (ESS), which
is sometimes also called an evolutionary stable strategy, is a strategy which, if adopted by a population of players,
cannot be invaded by any alternative strategy that is initially rare. An ESS is an equilibrium refinement of the Nash
equilibrium. It is a Nash equilibrium that is "evolutionarily" stable: once it is fixed in a population, natural selection
alone is sufficient to prevent alternative (mutant) strategies from invading successfully.
The ESS was developed to define a class of solutions to game theoretic problems, equivalent to the Nash
equilibrium, which could be applied to the evolution of social behaviour in animals. Nash equilibria may exist due to
the application of rational foresight, but that does not apply in an evolutionary context. Teleological forces such as
rational foresight cannot explain the outcomes of trial-and-error processes, such as evolution, and thus have no place
in biological applications. The definition of an ESS excludes such Nash equilibria.
First developed in 1973, the ESS is widely used in behavioural ecology and economics, and has been used in
anthropology, evolutionary psychology, philosophy, and political science.

History
Evolutionarily stable strategies were defined and introduced by John Maynard Smith and George R. Price in a 1973
Nature paper.[1] Such was the time taken in peer-reviewing the paper for Nature that this was preceded by a 1972
essay by Maynard Smith in a book of essays titled On Evolution.[2] The 1972 essay is sometimes cited instead of the
1973 paper, but university libraries are much more likely to have copies of Nature. Papers in Nature are usually
short; in 1974, Maynard Smith published a longer paper in the Journal of Theoretical Biology.[3] Maynard Smith
explains further in his 1982 book Evolution and the Theory of Games.[4] Sometimes these are cited instead. In fact,
the ESS has become so central to game theory that often no citation is given, as the reader is assumed to be familiar
with it.
Maynard Smith mathematically formalised a verbal argument made by Price, which he read while peer-reviewing
Price's paper. When Maynard Smith realized that the somewhat disorganised Price was not ready to revise his article
for publication, he offered to add Price as co-author.
The concept was derived from R. H. MacArthur[5] and W. D. Hamilton's[6] work on sex ratios, derived from Fisher's
principle, especially Hamilton's (1967) concept of an unbeatable strategy. Maynard Smith was jointly awarded the
1999 Crafoord Prize for his development of the concept of evolutionarily stable strategies and the application of
game theory to the evolution of behaviour.[7]
Uses of ESS:
• The ESS was a major element used to analyze evolution in Richard Dawkins' bestselling 1976 book The Selfish
Gene.
• The ESS was first used in the social sciences by Robert Axelrod in his 1984 book The Evolution of Cooperation.
Since then, it has been widely used in the social sciences, including anthropology, economics, philosophy, and
political science.
• In the social sciences, the primary interest is not in an ESS as the end of biological evolution, but as an end point
in cultural evolution or individual learning.[8]
• In evolutionary psychology, ESS is used primarily as a model for human biological evolution.

Motivation
The Nash equilibrium is the traditional solution concept in game theory. It depends on the cognitive abilities of the
players. It is assumed that players are aware of the structure of the game and consciously try to predict the moves of
their opponents and to maximize their own payoffs. In addition, it is presumed that all the players know this (see
common knowledge). These assumptions are then used to explain why players choose Nash equilibrium strategies.
Evolutionarily stable strategies are motivated entirely differently. Here, it is presumed that the players' strategies are
biologically encoded and heritable. Individuals have no control over their strategy and need not be aware of the
game. They reproduce and are subject to the forces of natural selection (with the payoffs of the game representing reproductive success, i.e. biological fitness). It is imagined that alternative strategies of the game occasionally occur, via
a process like mutation. To be an ESS, a strategy must be resistant to these alternatives.
Given the radically different motivating assumptions, it may come as a surprise that ESSes and Nash equilibria often
coincide. In fact, every ESS corresponds to a Nash equilibrium, but some Nash equilibria are not ESSes.

Nash equilibria and ESS


An ESS is a refined or modified form of a Nash equilibrium. (See the next section for examples which contrast the
two.) In a Nash equilibrium, if all players adopt their respective parts, no player can benefit by switching to any
alternative strategy. In a two player game, it is a strategy pair. Let E(S,T) represent the payoff for playing strategy S
against strategy T. The strategy pair (S, S) is a Nash equilibrium in a two player game if and only if this is true for
both players and for all T≠S:
E(S,S) ≥ E(T,S)
In this definition, strategy T can be a neutral alternative to S (scoring equally well, but not better). A Nash
equilibrium is presumed to be stable even if T scores equally, on the assumption that there is no long-term incentive
for players to adopt T instead of S. This fact represents the point of departure of the ESS.
Maynard Smith and Price[1] specify two conditions for a strategy S to be an ESS. Either
1. E(S,S) > E(T,S), or
2. E(S,S) = E(T,S) and E(S,T) > E(T,T)
for all T≠S.
The first condition is sometimes called a strict Nash equilibrium.[9] The second is sometimes called "Maynard
Smith's second condition". The second condition means that although strategy T is neutral with respect to the payoff
against strategy S, the population of players who continue to play strategy S has an advantage when playing against
T.
There is also an alternative definition of ESS, which places a different emphasis on the role of the Nash equilibrium
concept in the ESS concept. Following the terminology given in the first definition above, we have (adapted from
Thomas, 1985):[10]
1. E(S,S) ≥ E(T,S), and
2. E(S,T) > E(T,T)


for all T≠S.
In this formulation, the first condition specifies that the strategy is a Nash equilibrium, and the second specifies that
Maynard Smith's second condition is met. Note that the two definitions are not precisely equivalent: for example,
each pure strategy in the coordination game below is an ESS by the first definition but not the second.
In words, this definition says: the payoff of the first player when both players play strategy S is higher than (or equal to) the payoff of the first player when he changes to another strategy T and the second player keeps his strategy S, *and* the payoff of the first player when only he changes his strategy to T is higher than his payoff when both players change their strategies to T.
This formulation more clearly highlights the role of the Nash equilibrium condition in the ESS. It also allows for a
natural definition of related concepts such as a weak ESS or an evolutionarily stable set.[10]

Examples of differences between Nash Equilibria and ESSes

Prisoner's Dilemma
            Cooperate   Defect
Cooperate   3, 3        1, 4
Defect      4, 1        2, 2

Harm thy neighbor
     A      B
A    2, 2   1, 2
B    2, 1   2, 2

In most simple games, the ESSes and Nash equilibria coincide perfectly. For instance, in the Prisoner's Dilemma
there is only one Nash equilibrium, and its strategy (Defect) is also an ESS.
Some games may have Nash equilibria that are not ESSes. For example, in Harm thy neighbor both (A, A) and (B, B)
are Nash equilibria, since players cannot do better by switching away from either. However, only B is an ESS (and a
strong Nash). A is not an ESS, so B can neutrally invade a population of A strategists and predominate, because B
scores higher against B than A does against B. This dynamic is captured by Maynard Smith's second condition, since
E(A, A) = E(B, A), but it is not the case that E(A,B) > E(B,B).
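
The two Maynard Smith and Price conditions are mechanical enough to check in a few lines. A small Python sketch (added for illustration; the encoding is mine) confirms that only B is an ESS in Harm thy neighbor:

def is_ess(E, s, strategies):
    # Maynard Smith & Price: s is an ESS if for every alternative t != s,
    # either E(s,s) > E(t,s), or E(s,s) = E(t,s) and E(s,t) > E(t,t).
    return all(E[s, s] > E[t, s] or
               (E[s, s] == E[t, s] and E[s, t] > E[t, t])
               for t in strategies if t != s)

# Harm thy neighbor: E(A,A) = E(B,A) = 2, E(A,B) = 1, E(B,B) = 2.
E = {("A", "A"): 2, ("A", "B"): 1, ("B", "A"): 2, ("B", "B"): 2}
print(is_ess(E, "A", ["A", "B"]))  # False: B invades A neutrally, then wins
print(is_ess(E, "B", ["A", "B"]))  # True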

Harm everyone
     C      D
C    2, 2   1, 2
D    2, 1   0, 0

Chicken
         Swerve    Stay
Swerve   0, 0      -1, +1
Stay     +1, -1    -20, -20

Nash equilibria with equally scoring alternatives can be ESSes. For example, in the game Harm everyone, C is an ESS because it satisfies Maynard Smith's second condition. D strategists may temporarily invade a population of C strategists by scoring equally well against C, but they pay a price when they begin to play against each other; C scores better against D than D does against D. So here although E(C, C) = E(D, C), it is also the case that E(C,D) > E(D,D). As a result C is an ESS.

Even if a game has pure strategy Nash equilibria, it might be that none of those pure strategies are ESS. Consider the
Game of chicken. There are two pure strategy Nash equilibria in this game (Swerve, Stay) and (Stay, Swerve).
However, in the absence of an uncorrelated asymmetry, neither Swerve nor Stay are ESSes. There is a third Nash
equilibrium, a mixed strategy which is an ESS for this game (see Hawk-dove game and Best response for
explanation).
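
That mixed ESS can be found from the usual indifference condition. The following sketch (my own arithmetic, using the Chicken payoffs tabled above) solves for the probability of Stay that makes the opponent indifferent between Swerve and Stay:

from fractions import Fraction

# Row player's payoffs in Chicken; rows and columns ordered (Swerve, Stay).
A = [[Fraction(0), Fraction(-1)],
     [Fraction(1), Fraction(-20)]]

# Against the mix (1 - p, p), indifference E[Swerve] = E[Stay] means
# (1-p)*A[0][0] + p*A[0][1] = (1-p)*A[1][0] + p*A[1][1]; solving for p:
num = A[1][0] - A[0][0]
den = (A[1][0] - A[0][0]) + (A[0][1] - A[1][1])
p = num / den
print(p)  # 1/20: the mixed ESS plays Stay with probability 0.05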
This last example points to an important difference between Nash equilibria and ESS. Nash equilibria are defined on
strategy sets (a specification of a strategy for each player), while ESS are defined in terms of strategies themselves.
The equilibria defined by ESS must always be symmetric, and thus have fewer equilibrium points.

ESS vs. Evolutionarily Stable State


In population biology, the two concepts of an evolutionarily stable strategy (ESS) and an evolutionarily stable state
are closely linked but describe different situations.
• In an evolutionarily stable strategy, if all the members of a population adopt it, no mutant strategy can invade.[4]
Once virtually all members of the population use this strategy, there is no 'rational' alternative. ESS is part of
classical game theory.
• In an evolutionarily stable state, a population's genetic composition will be restored by selection after a
disturbance, if the disturbance is not too large. An evolutionarily stable state is a dynamic property of a population
that returns to using a strategy, or mix of strategies, if it is perturbed from that initial state. It is part of population genetics, dynamical systems, or evolutionary game theory.
Thomas (1984)[11] applies the term ESS to an individual strategy which may be mixed, and evolutionarily stable
population state to a population mixture of pure strategies which may be formally equivalent to the mixed ESS.
Whether a population is evolutionarily stable does not relate to its genetic diversity: it can be genetically
monomorphic or polymorphic.[4]

Prisoner's dilemma and ESS


Cooperate Defect

Cooperate 3, 3 1, 4

Defect 4, 1 2, 2

Prisoner's Dilemma

A common model of altruism and social cooperation is the Prisoner's dilemma. Here a group of players would
collectively be better off if they could play Cooperate, but since Defect fares better each individual player has an
incentive to play Defect. One solution to this problem is to introduce the possibility of retaliation by having
individuals play the game repeatedly against the same player. In the so-called iterated Prisoner's dilemma, the same
two individuals play the prisoner's dilemma over and over. While the Prisoner's dilemma has only two strategies
(Cooperate and Defect), the iterated Prisoner's dilemma has a huge number of possible strategies. Since an individual can have a different contingency plan for each history and the game may be repeated an indefinite number of times, there may in fact be an infinite number of such contingency plans.
Three simple contingency plans which have received substantial attention are Always Defect, Always Cooperate, and Tit for Tat. The first two strategies do the same thing regardless of the other player's actions, while the latter responds on the next round by doing what was done to it on the previous round—it responds to Cooperate with Cooperate and
Defect with Defect.

If the entire population plays Tit-for-Tat and a mutant arises who plays Always Defect, Tit-for-Tat will outperform Always Defect, so the mutant subpopulation will be kept small.
Tit for Tat is therefore an ESS, with respect to only these two strategies. On the other hand, an island of Always
Defect players will be stable against the invasion of a few Tit-for-Tat players, but not against a large number of
them.[12] If we introduce Always Cooperate, a population of Tit-for-Tat is no longer an ESS. Since a population of
Tit-for-Tat players always cooperates, the strategy Always Cooperate behaves identically in this population. As a
result, a mutant who plays Always Cooperate will not be eliminated. However, even though a population of Always
Cooperate and Tit-for-Tat can coexist, if there is a small percentage of the population that is Always Defect, the
selective pressure is against Always Cooperate, and in favour of Tit-for-Tat. This is due to the lower payoffs of
cooperating than those of defecting in case the opponent defects.
This demonstrates the difficulties in applying the formal definition of an ESS to games with large strategy spaces,
and has motivated some to consider alternatives.

ESS and human behavior


The fields of sociobiology and evolutionary psychology attempt to explain animal and human behavior and social
structures, largely in terms of evolutionarily stable strategies. Sociopathy (chronic antisocial or criminal behavior)
may be a result of a combination of two such strategies.[13]
Evolutionarily stable strategies were originally considered for biological evolution, but they can apply to other
contexts. In fact, there are stable states for a large class of adaptive dynamics. As a result, they can be used to explain
human behaviours that lack any genetic influences.

References
[1] Maynard Smith, J.; Price, G.R. (1973). "The logic of animal conflict". Nature 246 (5427): 15–8. Bibcode 1973Natur.246...15S.
doi:10.1038/246015a0.
[2] Maynard Smith, J. (1972). "Game Theory and The Evolution of Fighting". On Evolution. Edinburgh University Press. ISBN 0-85224-223-9.
[3] Maynard Smith, J. (1974). "The Theory of Games and the Evolution of Animal Conflicts". Journal of Theoretical Biology 47 (1): 209–21.
doi:10.1016/0022-5193(74)90110-6. PMID 4459582.
[4] Maynard Smith, John (1982). Evolution and the Theory of Games. ISBN 0-521-28884-3.
[5] MacArthur, R. H. (1965). Theoretical and mathematical biology. New York: Blaisdell.
[6] Hamilton, W.D. (1967). "Extraordinary sex ratios". Science 156 (3774): 477–88. doi:10.1126/science.156.3774.477. JSTOR 1721222.
PMID 6021675.
[7] Press release (http://www.crafoordprize.se/press/arkivpressreleases/thecrafoordprize1999.5.32d4db7210df50fec2d800018201.html) for the 1999 Crafoord Prize
[8] Alexander, Jason McKenzie (23 May 2003). "Evolutionary Game Theory" (http://plato.stanford.edu/entries/game-evolutionary/). Stanford Encyclopedia of Philosophy. Retrieved 31 August 2007.
[9] Harsanyi, J (1973). "Oddness of the number of equilibrium points: a new proof". Int. J. Game Theory 2 (1): 235–50.
doi:10.1007/BF01737572.
[10] Thomas, B. (1985). "On evolutionarily stable sets". J. Math. Biology 22: 105–115.
[11] Thomas, B. (1984). "Evolutionary stability: states and strategies". Theor. Pop. Biol. 26 (1): 49–67. doi:10.1016/0040-5809(84)90023-6.
[12] Axelrod, Robert (1984). The Evolution of Cooperation. ISBN 0-465-02121-2.
[13] Mealey, L. (1995). "The sociobiology of sociopathy: An integrated evolutionary model" (http://www.bbsonline.org/Preprints/OldArchive/bbs.mealey.html). Behavioral and Brain Sciences 18 (03): 523–99. doi:10.1017/S0140525X00039595.

Further reading
• Hines, WGS (1987). "Evolutionary stable strategies: a review of basic theory". Theoretical Population Biology 31
(2): 195–272. doi:10.1016/0040-5809(87)90029-3. PMID 3296292.
• Leyton-Brown, Kevin; Shoham, Yoav (2008). Essentials of Game Theory: A Concise, Multidisciplinary
Introduction (http://www.gtessentials.org). San Rafael, CA: Morgan & Claypool Publishers.
ISBN 978-1-598-29593-1. An 88-page mathematical introduction; see Section 3.8. Free online (http://www.morganclaypool.com/doi/abs/10.2200/S00108ED1V01Y200802AIM003) at many universities.
• Parker, G.A. (1984) Evolutionary stable strategies. In Behavioural Ecology: an Evolutionary Approach (2nd ed)
Krebs, J.R. & Davies N.B., eds. pp 30–61. Blackwell, Oxford.
• Shoham, Yoav; Leyton-Brown, Kevin (2009). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical
Foundations (http://www.masfoundations.org). New York: Cambridge University Press.
ISBN 978-0-521-89943-7. A comprehensive reference from a computational perspective; see Section 7.7.
Downloadable free online (http://www.masfoundations.org/download.html).
• John Maynard Smith. (1982) Evolution and the Theory of Games. ISBN 0-521-28884-3. Classic reference.

External links
• Evolutionarily Stable Strategies (http://www.animalbehavioronline.com/ess.html) at Animal Behavior: An
Online Textbook by Michael D. Breed.
• Game Theory and Evolutionarily Stable Strategies (http://www.holycross.edu/departments/biology/kprestwi/behavior/ESS/ESS_index_frmset.html), Kenneth N. Prestwich's site at College of the Holy Cross.
• Evolutionarily stable strategies knol (http://knol.google.com/k/klaus-rohde/evolutionarily-stable-strategies-and/xk923bc3gp4/50#)

Risk dominance
Risk dominance
Payoff dominance
A solution concept in game theory

Relationships

Subset of Nash equilibrium

Significance

Proposed by John Harsanyi, Reinhard Selten

Used for Non-cooperative games

Example Stag hunt

Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept
in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant
if it is Pareto superior to all other Nash equilibria in the game.1 When faced with a choice among equilibria, all
players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the
other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of
attraction, meaning the more uncertainty players have about the actions of the other player(s), the more likely they
will choose the strategy corresponding to it.
The payoff matrix in Figure 1 provides a simple two-player, two-strategy example of a game with two pure Nash
equilibria. The strategy pair (Hunt, Hunt) is payoff dominant since payoffs are higher for both players compared to
the other pure NE, (Gather, Gather). On the other hand, (Gather, Gather) risk dominates (Hunt, Hunt) since if
uncertainty exists about the other player's action, gathering will provide a higher expected payoff. The game in
Figure 1 is a well-known game-theoretic dilemma called stag hunt. The rationale behind it is that communal action
(hunting) yields a higher return if all players combine their skills, but if it is unknown whether the other player helps
in hunting, gathering might turn out to be the better individual strategy for food provision, since it does not depend
on coordinating with the other player. In addition, gathering alone is preferred to gathering in competition with
others. Like the Prisoner's dilemma, it provides a reason why collective action might fail in the absence of credible
commitments.

Fig. 1: Stag hunt example
        Hunt   Gather
Hunt    5, 5   0, 4
Gather  4, 0   2, 2

Fig. 2: Generic coordination game
     H     G
H    A, a  C, b
G    B, c  D, d

Formal definition
The game given in Figure 2 is a coordination game if the following payoff inequalities hold for player 1 (rows): A >
B, D > C, and for player 2 (columns): a > b, d > c. The strategy pairs (H, H) and (G, G) are then the only pure Nash
equilibria. In addition there is a mixed Nash equilibrium where player 1 plays H with probability p = (d-c)/(a-b-c+d)
and G with probability 1–p; player 2 plays H with probability q = (D-C)/(A-B-C+D) and G with probability 1–q.
Strategy pair (H, H) payoff dominates (G, G) if A ≥ D, a ≥ d, and at least one of the two is a strict inequality: A > D
or a > d.
Strategy pair (G, G) risk dominates (H, H) if the product of the deviation losses is highest for (G, G) (Harsanyi and
Selten, 1988, Lemma 5.4.4). In other words, if the following inequality holds: (C – D)(c – d) ≥ (B – A)(b – a). If the inequality is strict then (G, G) strictly risk dominates (H, H).2 (That is, players have more incentive to deviate.)
If the game is symmetric, so if A = a, B = b, etc., the inequality allows for a simple interpretation: We assume the
players are unsure about which strategy the opponent will pick and assign probabilities for each strategy. If each
player assigns probabilities ½ to H and G each, then (G, G) risk dominates (H, H) if the expected payoff from
playing G exceeds the expected payoff from playing H: ½ B + ½ D ≥ ½ A + ½ C, or simply B + D ≥ A + C.
Another way to calculate the risk dominant equilibrium is to calculate the risk factor for all equilibria and to find the equilibrium with the smallest risk factor. To calculate the risk factor in our 2x2 game, consider the expected payoff to a player if they play H: pA + (1 − p)C (where p is the probability that the other player will play H), and compare it to the expected payoff if they play G: pB + (1 − p)D. The value of p which makes these two expected values equal is the risk factor for the equilibrium (H, H), with 1 − p the risk factor for playing (G, G). You can also calculate the risk factor for playing (G, G) by doing the same calculation, but setting p as the probability the other player will play G. An interpretation of p is that it is the smallest probability that the opponent will play a strategy such that the payoff from that strategy is greater than the payoff from the other strategy.
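
Both tests are simple to compute. The sketch below (an added illustration; the function is mine) applies the deviation-loss product and the risk factor formula to the stag hunt of Fig. 1:

from fractions import Fraction

def risk_dominant(A, a, B, b, C, c, D, d):
    # Harsanyi-Selten test: compare the products of deviation losses
    # at (H, H) and at (G, G) for the generic game of Fig. 2.
    hh = (A - B) * (a - b)
    gg = (D - C) * (d - c)
    return "(H, H)" if hh > gg else "(G, G)" if gg > hh else "tie"

# Stag hunt of Fig. 1: A = a = 5, B = b = 4, C = c = 0, D = d = 2.
print(risk_dominant(5, 5, 4, 4, 0, 0, 2, 2))  # (G, G): gathering risk dominates

# Risk factor of (H, H): solve p*A + (1-p)*C = p*B + (1-p)*D for p.
A_, B_, C_, D_ = 5, 4, 0, 2
p = Fraction(D_ - C_, A_ - B_ - C_ + D_)
print(p)  # 2/3: hunting pays only if the partner hunts with probability >= 2/3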

Equilibrium selection
A number of evolutionary approaches have established that when played in a large population, players might fail to
play the payoff dominant equilibrium strategy and instead end up in the payoff dominated, risk dominant
equilibrium. Two separate evolutionary models both support the idea that the risk dominant equilibrium is more
likely to occur. The first model, based on replicator dynamics, predicts that a population is more likely to adopt the
risk dominant equilibrium than the payoff dominant equilibrium. The second model, based on best response strategy
revision and mutation, predicts that the risk dominant state is the only stochastically stable equilibrium. Both models
assume that multiple two-player games are played in a population of N players. The players are matched randomly
with opponents, with each player having equal likelihoods of drawing any of the N−1 other players. The players start
with a pure strategy, G or H, and play this strategy against their opponent. In replicator dynamics, the population
game is repeated in sequential generations where subpopulations change based on the success of their chosen strategies. In best response, players update their strategies to improve expected payoffs in the subsequent generations. The recognition of Kandori, Mailath & Rob (1993) and Young (1993) was that if the rule to update one's strategy allows for mutation, and the probability of mutation vanishes, i.e. asymptotically reaches zero over time, the likelihood that the risk dominant equilibrium is reached goes to one, even if it is payoff dominated.3
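
A crude replicator-dynamics simulation conveys the flavor of these results (this sketch is my own illustration, not the Kandori-Mailath-Rob or Young model): in the stag hunt of Fig. 1, any initial share of hunters below the 2/3 risk factor drifts to the risk dominant all-Gather state, even though all-Hunt is payoff dominant.

def replicator_step(x, dt=0.1):
    # One discretized replicator step for the stag hunt of Fig. 1,
    # where x is the population share playing Hunt.
    f_hunt = 5 * x + 0 * (1 - x)
    f_gather = 4 * x + 2 * (1 - x)
    f_avg = x * f_hunt + (1 - x) * f_gather
    return x + dt * x * (f_hunt - f_avg)

for x0 in (0.5, 0.66, 0.7):
    x = x0
    for _ in range(2000):
        x = replicator_step(x)
    print(x0, "->", round(x, 3))
# 0.5 and 0.66 collapse to 0.0 (all Gather); only 0.7 > 2/3 reaches 1.0 (all Hunt)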

Notes
• 1  A single Nash equilibrium is trivially payoff and risk dominant if it is the only NE in the game.
• 2  Similar distinctions between strict and weak exist for most definitions here, but are not denoted explicitly
unless necessary.
• 3  Harsanyi and Selten (1988) propose that the payoff dominant equilibrium is the rational choice in the stag hunt
game, however Harsanyi (1995) retracted this conclusion to take risk dominance as the relevant selection
criterion.

References
• Samuel Bowles: Microeconomics: Behavior, Institutions, and Evolution, Princeton University Press, pp. 45–46
(2004) ISBN 0-691-09163-3
• Drew Fudenberg and David K. Levine: The Theory of Learning in Games, MIT Press, p. 27 (1999) ISBN
0-262-06194-5
• John C. Harsanyi: "A New Theory of Equilibrium Selection for Games with Complete Information", Games and Economic Behavior 8, pp. 91–122 (1995)
• John C. Harsanyi and Reinhard Selten: A General Theory of Equilibrium Selection in Games, MIT Press (1988)
ISBN 0-262-08173-3
• Michihiro Kandori, George J. Mailath & Rafael Rob: "Learning, Mutation, and Long-run Equilibria in Games",
Econometrica 61, pp. 29–56 (1993) Abstract [1]
• Roger B. Myerson: Game Theory, Analysis of Conflict, Harvard University Press, pp. 118–119 (1991) ISBN
0-674-34115-5
• Larry Samuelson: Evolutionary Games and Equilibrium Selection, MIT Press (1997) ISBN 0-262-19382-5
• H. Peyton Young: "The Evolution of Conventions", Econometrica, 61, pp. 57–84 (1993) Abstract [2]
• H. Peyton Young: Individual Strategy and Social Structure, Princeton University Press (1998) ISBN
0-691-08687-7

References
[1] http://econpapers.repec.org/article/ecmemetrp/v_3A61_3Ay_3A1993_3Ai_3A1_3Ap_3A29-56.htm
[2] http://econpapers.repec.org/article/ecmemetrp/v_3a61_3ay_3a1993_3ai_3a1_3ap_3a57-84.htm

Self-confirming equilibrium
Self-confirming equilibrium
A solution concept in game theory

Relationships

Subset of Rationalizability

Superset of Nash equilibrium

Significance

Proposed by Drew Fudenberg and David K. Levine

Used for Extensive-form games

In game theory, self-confirming equilibrium is a generalization of Nash equilibrium for extensive form games, in
which players correctly predict the moves their opponents actually make, but may have misconceptions about what
their opponents would do at information sets that are never reached when the equilibrium is played. Informally,
self-confirming equilibrium is motivated by the idea that if a game is played repeatedly, the players will revise their
beliefs about their opponents' play if and only if they observe these beliefs to be wrong.
Consistent self-confirming equilibrium is a refinement of self-confirming equilibrium that further requires that
each player correctly predicts play at all information sets that can be reached when the player's opponents, but not
the player herself, deviate from their equilibrium strategies. Consistent self-confirming equilibrium is motivated by
learning models in which players are occasionally matched with "crazy" opponents, so that even if they stick to their
equilibrium strategy themselves, they eventually learn the distribution of play at all information sets that can be
reached if their opponents deviate.

References
• Drew Fudenberg and David K. Levine: "Self-confirming Equilibrium [1]", Econometrica 61:523-545, 1993.

References
[1] http://www.dklevine.com/papers/sce.pdf

Strategies

Dominance
In game theory, strategic dominance (commonly called simply dominance) occurs when one strategy is better than
another strategy for one player, no matter how that player's opponents may play. Many simple games can be solved
using dominance. The opposite, intransitivity, occurs in games where one strategy may be better or worse than
another strategy for one player, depending on how the player's opponents may play.

Terminology
When a player tries to choose the "best" strategy among a multitude of options, that player may compare two
strategies A and B to see which one is better. The result of the comparison is one of:
• B dominates A: choosing B always gives at least as good an outcome as choosing A. There are 2 possibilities:
• B strictly dominates A: choosing B always gives a better outcome than choosing A, no matter what the other
player(s) do.
• B weakly dominates A: There is at least one set of opponents' actions for which B is superior, and all other sets of opponents' actions give B at least the same payoff as A.
• B and A are intransitive: B neither dominates, nor is dominated by, A. Choosing A is better in some cases, while
choosing B is better in other cases, depending on exactly how the opponent chooses to play. For example, B is
"throw rock" while A is "throw scissors" in Rock, Paper, Scissors.
• B is dominated by A: choosing B never gives a better outcome than choosing A, no matter what the other
player(s) do. There are 2 possibilities:
• B is weakly dominated by A: There is at least one set of opponents' actions for which B gives a worse
outcome than A, while all other sets of opponents' actions give A at least the same payoff as B. (Strategy A
weakly dominates B).
• B is strictly dominated by A: choosing B always gives a worse outcome than choosing A, no matter what the
other player(s) do. (Strategy A strictly dominates B).
This notion can be generalized beyond the comparison of two strategies.
• Strategy B is strictly dominant if strategy B strictly dominates every other possible strategy.
• Strategy B is weakly dominant if strategy B dominates all other strategies, but some are only weakly dominated.
• Strategy B is strictly dominated if some other strategy exists that strictly dominates B.
• Strategy B is weakly dominated if some other strategy exists that weakly dominates B.

Mathematical definition
For any player i, a strategy s weakly dominates another strategy s' if

u_i(s, s_-i) ≥ u_i(s', s_-i) for every profile s_-i in S_-i (with at least one s_-i that gives a strict inequality),

and s strictly dominates s' if

u_i(s, s_-i) > u_i(s', s_-i) for every s_-i in S_-i,

where S_-i represents the product of all strategy sets other than player i's.
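
A direct Python transcription of this definition (added as an illustration; the helper names are mine) enumerates all opponent profiles in S_-i:

import itertools

def dominates(u_i, s, s2, opp_sets, strict=False):
    # Does player i's strategy s dominate s2?  u_i(s, s_minus_i) is player i's
    # payoff; opp_sets lists the other players' strategy sets, whose
    # Cartesian product is S_-i.
    strict_somewhere = False
    for s_minus_i in itertools.product(*opp_sets):
        a, b = u_i(s, s_minus_i), u_i(s2, s_minus_i)
        if a < b or (strict and a == b):
            return False
        if a > b:
            strict_somewhere = True
    return strict_somewhere or strict

# The 2x2 game of the next subsection: C weakly, but not strictly, dominates D.
payoff = {("C", "C"): 1, ("C", "D"): 0, ("D", "C"): 0, ("D", "D"): 0}
u = lambda s, opp: payoff[s, opp[0]]
print(dominates(u, "C", "D", [["C", "D"]]))               # True
print(dominates(u, "C", "D", [["C", "D"]], strict=True))  # False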

Dominance and Nash equilibria

C D
C 1, 1 0, 0
D 0, 0 0, 0

If a strictly dominant strategy exists for one player in a game, that player will play that strategy in each of the game's
Nash equilibria. If both players have a strictly dominant strategy, the game has only one unique Nash equilibrium.
However, that Nash equilibrium is not necessarily Pareto optimal, meaning that there may be non-equilibrium
outcomes of the game that would be better for both players. The classic game used to illustrate this is the Prisoner's
Dilemma.
Strictly dominated strategies cannot be a part of a Nash equilibrium, and as such, it is irrational for any player to play
them. On the other hand, weakly dominated strategies may be part of Nash equilibria. For instance, consider the
payoff matrix pictured at the right.
Strategy C weakly dominates strategy D. Consider playing C: If one's opponent plays C, one gets 1; if one's
opponent plays D, one gets 0. Compare this to D, where one gets 0 regardless. Since in one case, one does better by
playing C instead of D and never does worse, C weakly dominates D. Despite this, (D, D) is a Nash equilibrium.
Suppose both players choose D. Neither player will do any better by unilaterally deviating—if a player switches to
playing C, they will still get 0. This satisfies the requirements of a Nash equilibrium. Suppose both players choose C.
Neither player will do better by unilaterally deviating—if a player switches to playing D, they will get 0. This also
satisfies the requirements of a Nash equilibrium.

Iterated elimination of dominated strategies (IEDS)


The iterated elimination (or deletion) of dominated strategies is one common technique for solving games that
involves iteratively removing dominated strategies. In the first step, at most one dominated strategy is removed from
the strategy space of each of the players since no rational player would ever play these strategies. This results in a
new, smaller game. Some strategies—that were not dominated before—may be dominated in the smaller game. The
first step is repeated, creating a new even smaller game, and so on. It is possible that in a given step no strategies are deleted for some players. The process stops when no strategies are deleted in a round. This process is valid since it
is assumed that rationality among players is common knowledge, that is, each player knows that the rest of the
players are rational, and each player knows that the rest of the players know that he knows that the rest of the players
are rational, and so on ad infinitum (see Aumann, 1976).
There are two versions of this process. One version involves only eliminating strictly dominated strategies. If, after
completing this process, there is only one strategy for each player remaining, that strategy set is the unique Nash
equilibrium.
Another version involves eliminating both strictly and weakly dominated strategies. If, at the end of the process,
there is a single strategy for each player, this strategy set is also a Nash equilibrium. However, unlike the first
process, elimination of weakly dominated strategies may eliminate some Nash equilibria. As a result, the Nash
equilibrium found by eliminating weakly dominated strategies may not be the only Nash equilibrium. (In some
games, if we remove weakly dominated strategies in a different order, we may end up with a different Nash
equilibrium.)
In any case, if by iterated elimination of dominated strategies there is only one strategy left for each player, the game is called a dominance-solvable game.
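
A sketch of the strict-dominance version (written for this text; it removes every strictly dominated strategy in each round, which is safe because for strict dominance the order of elimination does not matter):

import itertools

def ieds(payoffs, strategy_sets):
    # Iterated elimination of strictly dominated strategies.
    # payoffs[i][profile] is player i's payoff for a full strategy profile.
    sets = [list(s) for s in strategy_sets]
    changed = True
    while changed:
        changed = False
        for i, own in enumerate(sets):
            others = [s for j, s in enumerate(sets) if j != i]

            def u(s, rest):
                prof = list(rest)
                prof.insert(i, s)           # rebuild the full profile
                return payoffs[i][tuple(prof)]

            dominated = [s for s in own
                         if any(all(u(t, r) > u(s, r)
                                    for r in itertools.product(*others))
                                for t in own if t != s)]
            if dominated:
                sets[i] = [s for s in own if s not in dominated]
                changed = True
    return sets

# Prisoner's dilemma: Defect strictly dominates Cooperate for both players.
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
payoffs = [{k: v[0] for k, v in pd.items()}, {k: v[1] for k, v in pd.items()}]
print(ieds(payoffs, [["C", "D"], ["C", "D"]]))  # [['D'], ['D']]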

References
• Fudenberg, Drew; Tirole, Jean (1993). Game Theory. MIT Press.
• Gibbons, Robert (1992). Game Theory for Applied Economists. Princeton University Press. ISBN 0-691-00395-5.
• Gintis, Herbert (2000). Game Theory Evolving. Princeton University Press. ISBN 0-691-00943-0.
• Leyton-Brown, Kevin; Shoham, Yoav (2008). Essentials of Game Theory: A Concise, Multidisciplinary
Introduction [1]. San Rafael, CA: Morgan & Claypool Publishers. ISBN 978-1-598-29593-1.. An 88-page
mathematical introduction; see Section 3.3. Free online [2] at many universities.
• Rapoport, A. (1966). Two-Person Game Theory: The Essential Ideas. University of Michigan Press.
• Jim Ratliff's Game Theory Course: Strategic Dominance [1]
• Shoham, Yoav; Leyton-Brown, Kevin (2009). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical
Foundations [3]. New York: Cambridge University Press. ISBN 978-0-521-89943-7.. A comprehensive reference
from a computational perspective; see Sections 3.4.3, 4.5. Downloadable free online [4].
This article incorporates material from Dominant strategy on PlanetMath, which is licensed under the Creative
Commons Attribution/Share-Alike License.

References
[1] http://www.virtualperfection.com/gametheory/Section2.1.html

Strategy
In game theory, a player's strategy in a game is a complete plan of action for whatever situation might arise; this
fully determines the player's behaviour. A player's strategy will determine the action the player will take at any stage
of the game, for every possible history of play up to that stage.
A strategy profile (sometimes called a strategy combination) is a set of strategies for each player which fully
specifies all actions in a game. A strategy profile must include one and only one strategy for every player.
The strategy concept is sometimes (wrongly) confused with that of a move. A move is an action taken by a player at
some point during the play of a game (e.g., in chess, moving white's Bishop a2 to b3). A strategy on the other hand
is a complete algorithm for playing the game, telling a player what to do for every possible situation throughout the
game.

Strategy set
A player's strategy set defines what strategies are available for them to play.
A player has a finite strategy set if they have a number of discrete strategies available to them. For instance, in a
single game of Rock-paper-scissors, each player has the finite strategy set {rock, paper, scissors}.
A strategy set is infinite otherwise. For instance, an auction with mandated bid increments may have an infinite
number of discrete strategies in the strategy set {$10, $20, $30, ...}. Alternatively, the Cake cutting game has a
bounded continuum of strategies in the strategy set {Cut anywhere between zero percent and 100 percent of the
cake}.
In a dynamic game, the strategy set consists of the possible rules a player could give to a robot or agent on how to
play the game. For instance, in the Ultimatum game, the strategy set for the second player would consist of every
possible rule for which offers to accept and which to reject.
In a Bayesian game, the strategy set is similar to that in a dynamic game. It consists of rules for what action to take
for any possible private information.

Choosing a strategy set


In applied game theory, the definition of the strategy sets is an important part of the art of making a game
simultaneously solvable and meaningful. The game theorist can use knowledge of the overall problem to limit the
strategy spaces, and ease the solution.
For instance, strictly speaking in the Ultimatum game a player can have strategies such as: Reject offers of ($1, $3,
$5, ..., $19), accept offers of ($0, $2, $4, ..., $20). Including all such strategies makes for a very large strategy space
and a somewhat difficult problem. A game theorist might instead believe they can limit the strategy set to: {Reject
any offer ≤ x, accept any offer > x; for x in ($0, $1, $2, ..., $20)}.

Pure and mixed strategies


A pure strategy provides a complete definition of how a player will play a game. In particular, it determines the
move a player will make for any situation he or she could face. A player's strategy set is the set of pure strategies
available to that player.
A mixed strategy is an assignment of a probability to each pure strategy. This allows for a player to randomly select
a pure strategy. Since probabilities are continuous, there are infinitely many mixed strategies available to a player,
even if their strategy set is finite.
Of course, one can regard a pure strategy as a degenerate case of a mixed strategy, in which that particular pure
strategy is selected with probability 1 and every other strategy with probability 0.

A totally mixed strategy is a mixed strategy in which the player assigns a strictly positive probability to every pure
strategy. (Totally mixed strategies are important for equilibrium refinement such as trembling hand perfect
equilibrium.)

Mixed strategy

Illustration

     A      B
A    1, 1   0, 0
B    0, 0   1, 1
Pure coordination game

Consider the payoff matrix pictured to the right (known as a coordination game). Here one player chooses the row
and the other chooses a column. The row player receives the first payoff, the column player the second. If row opts
to play A with probability 1 (i.e. play A for sure), then he is said to be playing a pure strategy. If column opts to flip
a coin and play A if the coin lands heads and B if the coin lands tails, then she is said to be playing a mixed strategy,
and not a pure strategy.
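
In code, a mixed strategy is simply a probability distribution over the pure strategy set, from which a pure action is sampled each time the game is played. A small sketch (added here as an illustration) pits the pure strategy A against the coin-flip mixture in this coordination game:

import random

# Payoffs of the pure coordination game, keyed by (row action, column action).
payoff = {("A", "A"): (1, 1), ("A", "B"): (0, 0),
          ("B", "A"): (0, 0), ("B", "B"): (1, 1)}

def play(row_strategy, col_strategy):
    # Each strategy maps pure actions to probabilities; sample one action each.
    row = random.choices(list(row_strategy), weights=list(row_strategy.values()))[0]
    col = random.choices(list(col_strategy), weights=list(col_strategy.values()))[0]
    return payoff[row, col]

pure_A = {"A": 1.0, "B": 0.0}      # a pure strategy as a degenerate mixture
coin_flip = {"A": 0.5, "B": 0.5}   # the mixed strategy described above
results = [play(pure_A, coin_flip) for _ in range(10000)]
print(sum(r[0] for r in results) / len(results))  # close to 0.5 for the row player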

Significance
In his famous paper, John Forbes Nash proved that there is an equilibrium for every finite game. One can divide
Nash equilibria into two types. Pure strategy Nash equilibria are Nash equilibria where all players are playing pure
strategies. Mixed strategy Nash equilibria are equilibria where at least one player is playing a mixed strategy. While
Nash proved that every finite game has a Nash equilibrium, not all have pure strategy Nash equilibria. For an
example of a game that does not have a Nash equilibrium in pure strategies, see Matching pennies. However, many
games do have pure strategy Nash equilibria (e.g. the Coordination game, the Prisoner's dilemma, the Stag hunt).
Further, games can have both pure strategy and mixed strategy equilibria.

A disputed meaning
During the 1980s, the concept of mixed strategies came under heavy fire for being "intuitively problematic".[1]
Randomization, central in mixed strategies, lacks behavioral support. Seldom do people make their choices
following a lottery. This behavioral problem is compounded by the cognitive difficulty that people are unable to
generate random outcomes without the aid of a random or pseudo-random generator.[1]
In 1991,[2] game theorist Ariel Rubinstein described alternative ways of understanding the concept. The first, due to
Harsanyi (1973),[3] is called purification, and supposes that the mixed strategies interpretation merely reflects our lack of knowledge of the players' information and decision-making process. Apparently random choices are then seen as consequences of non-specified, payoff-irrelevant exogenous factors. However, it is unsatisfying to have results that hang on unspecified factors.[2]
A second interpretation imagines the game players standing for a large population of agents. Each of the agents
chooses a pure strategy, and the payoff depends on the fraction of agents choosing each strategy. The mixed strategy
hence represents the distribution of pure strategies chosen by each population. However, this does not provide any
justification for the case when players are individual agents.

Later, Aumann and Brandenburger (1995), [4] re-interpreted Nash equilibrium as an equilibrium in beliefs, rather
than actions. For instance, in Rock-paper-scissors an equilibrium in beliefs would have each player believing the
other was equally likely to play each strategy. This interpretation weakens the predictive power of Nash equilibrium,
however, since it is possible in such an equilibrium for each player to actually play a pure strategy of Rock.
Ever since, game theorists' attitudes towards mixed-strategy-based results have been ambivalent. Mixed strategies are still widely used for their capacity to provide Nash equilibria in games where no equilibrium in pure strategies exists, but the model does not specify why and how players randomize their decisions.

References
[1] Aumann, R. (1985). "What is Game Theory Trying to accomplish?" (http://www.ma.huji.ac.il/raumann/pdf/what is game theory.pdf). In Arrow, K.; Honkapohja, S. Frontiers of Economics. Oxford: Basil Blackwell. pp. 909–924.
[2] Rubinstein, A. (1991). "Comments on the interpretation of Game Theory". Econometrica 59 (4): 909–924. JSTOR 2938166.
[3] Harsanyi, John (1973), "Games with randomly disturbed payoffs: a new rationale for mixed-strategy equilibrium points", Int. J. Game Theory
2: 1–23, doi:10.1007/BF01737554
[4] Aumann, Robert; Brandenburger, Adam (1995), "Epistemic Conditions for Nash Equilibrium", Econometrica (The Econometric Society) 63
(5): 1161–1180, doi:10.2307/2171725, JSTOR 2171725

Tit for tat


Tit for tat is an English saying meaning "equivalent retaliation". It is
also a highly effective strategy in game theory for the iterated
prisoner's dilemma. It was first introduced by Anatol Rapoport in
Robert Axelrod's two tournaments, held around 1980. An agent using
this strategy will initially cooperate, then respond in kind to an
opponent's previous action. If the opponent previously was
cooperative, the agent is cooperative. If not, the agent is not. This is
similar to superrationality and reciprocal altruism in biology.

Overview
This strategy is dependent on four conditions, which have allowed it to become the most successful strategy for the iterated prisoner's dilemma:[1]
[Image caption: In Western business cultures, a handshake when meeting someone is an example of initial cooperation.]

1. Unless provoked, the agent will always cooperate


2. If provoked, the agent will retaliate
3. The agent is quick to forgive
4. The agent must have a good chance of competing against the opponent more than once.
In the last condition, the definition of "good chance" depends on the payoff matrix of the prisoner's dilemma. The
important thing is that the competition continues long enough for repeated punishment and forgiveness to generate a
long-term payoff higher than the possible loss from cooperating initially.
A fifth condition applies to make the competition meaningful: if an agent knows that the next play will be the last, it
should naturally defect for a higher score. Similarly if it knows that the next two plays will be the last, it should
defect twice, and so on. Therefore the number of competitions must not be known in advance to the agents.
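
The first three conditions fit in a few lines of Python (a sketch written for this text; moves are encoded as 'C' for cooperate and 'D' for defect):

def tit_for_tat(opponent_history):
    # Condition 1: cooperate unless provoked.
    if not opponent_history:
        return "C"
    # Conditions 2 and 3: retaliate against a defection, but forgive as soon
    # as the opponent cooperates again - simply mirror the last move.
    return opponent_history[-1]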
Generally, in game theory, effectiveness of a strategy is measured under the assumption that each player cares only
about him or herself. (Thus, the game-theory measure of effectiveness is impractical in many real life situations
where players do have a vested interest in, or an altruistic compassion towards, other players.) Furthermore,

game-theory effectiveness is usually measured under the assumption of perfect communication, where it is assumed
that a player never misinterprets the intention of other players. By this game-theory definition of effectiveness tit for
tat was superior to a variety of alternative strategies, winning in several annual automated tournaments against
(generally far more complex) strategies created by teams of computer scientists, economists, and psychologists.
Some game theorists informally believe the strategy to be optimal, although no proof is presented.
In some competitions tit for tat was not the most effective strategy, even under the game-theory definition of
effectiveness. However, tit for tat would have been the most effective strategy if the average performance of each
competing team were compared. The team which recently won over a pure tit for tat team outperformed it with some
of their algorithms because they submitted multiple algorithms which would recognize each other and assume a
master and slave relationship (one algorithm would "sacrifice" itself and obtain a very poor result for the other
algorithm to be able to outperform tit for tat on an individual basis, but not as a pair or group). This "group" victory
illustrates one of the important limitations of the Prisoner's Dilemma in representing social reality, namely, that it
does not include any natural equivalent for friendship or alliances. The advantage of tit for tat thus pertains only to a
Hobbesian world of so-called rational solutions (with perfect communication), not to a world in which humans are
inherently social. However, that this winning solution does not work effectively against groups of agents running tit
for tat illustrates the strengths of tit for tat when employed in a team (that the team does better overall, and all the
agents on the team do well individually, when every agent cooperates).

Example of play
            Cooperate   Defect
Cooperate   3, 3        0, 5
Defect      5, 0        1, 1
Prisoner's dilemma example

Assume there are four agents: two use the tit-for-tat strategy, and two are "defectors" who will simply try to
maximize their own winnings by always giving evidence against the other. Assume that each player faces the other
three over a series of six games. If one player gives evidence against a player who does not, the former gains 5 points
and the latter nets 0. If both refrain from giving evidence, both gain 3 points. If both give evidence against each
other, both gain 1 point.
When a tit-for-tat agent faces off against a defector, the former refrains from giving evidence in the first game while the defector does the opposite, gaining 5 points. In the remaining 5 games, both players give evidence against each other, netting 1 point each game. The defector scores a total of 10, and the tit-for-tat agent scores 5.
When the tit-for-tat agents face off against each other, each refrains from giving evidence in all six games. Both
agents win 3 points per game, for a total of 18 points each.
When the defectors face off, each gives evidence against the other in all six games. Both defectors win 1 point per
game, for a total of 6 points each.
Each tit-for-tat agent scores a total of 28 points (18 against the fellow tit-for-tat, 5 against each of the two defectors),
over the eighteen matches. Each defector scores only 26 points (6 against the fellow defector, 10 against each of the
tit-for-tats).
Despite the fact that the tit-for-tat agents never won a match and the defectors never lost one, the tit-for-tat strategy still came out ahead, because the final standing is determined not by match wins but by total points. Simply put, the tit-for-tat agents gained more points tying with each other than they lost to the defectors.
The more tit-for-tat agents there are in the described game, the more advantageous it is to use the tit-for-tat strategy; the fewer there are, the less advantageous it is.
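The arithmetic above can be reproduced with a short round-robin simulation. The following sketch is illustrative only (the payoff values and the six games per pairing come from the example; the function and agent names are invented here); it recomputes the totals of 28 points per tit-for-tat agent and 26 per defector:

```python
from itertools import combinations

# Payoffs from the example: (row player's score, column player's score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

def defector(my_history, their_history):
    # Always give evidence against the other player.
    return "D"

def play_match(s1, s2, rounds=6):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

agents = [("tft-1", tit_for_tat), ("tft-2", tit_for_tat),
          ("def-1", defector), ("def-2", defector)]
totals = {name: 0 for name, _ in agents}
for (n1, s1), (n2, s2) in combinations(agents, 2):
    p1, p2 = play_match(s1, s2)
    totals[n1] += p1
    totals[n2] += p2
print(totals)  # {'tft-1': 28, 'tft-2': 28, 'def-1': 26, 'def-2': 26}
```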

Implications
The success of the tit for tat strategy, which is largely cooperative despite the adversarial ring of its name, took many by surprise. In successive competitions various teams produced complex strategies which attempted to "cheat" in a variety of cunning ways, but tit for tat eventually prevailed in every competition.
This result may give insight into how groups of animals (and particularly human societies) have come to live in
largely (or entirely) cooperative societies, rather than the individualistic "red in tooth and claw" way that might be
expected from individuals engaged in a Hobbesian state of nature. This, and particularly its application to human
society and politics, is the subject of Robert Axelrod's book The Evolution of Cooperation.

Problems
While Axelrod has empirically shown that the strategy is optimal in some cases, two agents playing tit for tat remain
vulnerable. A one-time, single-bit error in either player's interpretation of events can lead to an unending "death
spiral". In this symmetric situation, each side perceives itself as preferring to cooperate, if only the other side would.
But each is forced by the strategy into repeatedly punishing an opponent who continues to attack despite being
punished in every game cycle. Both sides come to think of themselves as innocent and acting in self-defense, and
their opponent as either evil or too stupid to learn to cooperate.
This situation frequently arises in real world conflicts, ranging from schoolyard fights to civil and regional wars. Tit for two tats could be used to avoid this problem.[2]
"Tit for tat with forgiveness" is sometimes superior. When the opponent defects, the player will occasionally
cooperate on the next move anyway. This allows for recovery from getting trapped in a cycle of defections. The
exact probability that a player will respond with cooperation depends on the line-up of opponents.
The reason for these issues is that tit for tat is not a subgame perfect equilibrium.[3] If one agent defects and the
opponent cooperates, then both agents will end up alternating cooperate and defect, yielding a lower payoff than if
both agents were to continually cooperate. While this subgame is not directly reachable by two agents playing tit for
tat strategies, a strategy must be a Nash equilibrium in all subgames to be subgame perfect. Further, this subgame
may be reached if any noise is allowed in the agents' signaling. A subgame perfect variant of tit for tat known as
"contrite tit for tat" may be created by employing a basic reputation mechanism.[4]

Tit for two tats


Tit for two tats is similar to tit for tat in that it is nice, retaliating, forgiving and non-envious, the only difference
between the two being how nice the strategy is.
In a tit for tat strategy, once an opponent defects, the tit for tat player immediately responds by defecting on the next
move. This has the unfortunate consequence of causing two retaliatory strategies to continuously defect against one
another resulting in a poor outcome for both players. A tit for two tats player will let the first defection go
unchallenged as a means to avoid the "death spiral" of the previous example. If the opponent defects twice in a row,
the tit for two tats player will respond by defecting.
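The difference between the two rules is easiest to see in a short simulation. The sketch below is illustrative only (the round count and the position of the injected error are arbitrary assumptions); it shows tit for tat echoing a single accidental defection indefinitely, while tit for two tats absorbs it:

```python
def tit_for_tat(their_history):
    return their_history[-1] if their_history else "C"

def tit_for_two_tats(their_history):
    # Defect only after two consecutive defections by the opponent.
    return "D" if their_history[-2:] == ["D", "D"] else "C"

def run(strategy, rounds=8, error_round=2):
    h1, h2 = [], []
    for t in range(rounds):
        m1, m2 = strategy(h2), strategy(h1)
        if t == error_round:
            m1 = "D"   # a single move is misread/flipped into a defection
        h1.append(m1)
        h2.append(m2)
    return "".join(h1), "".join(h2)

print(run(tit_for_tat))       # ('CCDCDCDC', 'CCCDCDCD'): the echoing "death spiral"
print(run(tit_for_two_tats))  # ('CCDCCCCC', 'CCCCCCCC'): the error is forgiven
```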
This strategy was put forward by Robert Axelrod during his second round of computer simulations at RAND. After
analyzing the results of the first experiment, he determined that had a participant entered the tit for two tats strategy
it would have emerged with a higher cumulative score than any other program. As a result, he himself entered it with
high expectations in the second tournament. Unfortunately, owing to the more aggressive nature of the programs
entered in the second round, which were able to take advantage of its highly forgiving nature, tit for two tats did
significantly worse (in the game-theory sense) than tit for tat.[5]

Real world use

Peer-to-peer file sharing


BitTorrent peers use a tit-for-tat strategy to optimize their download speed.[6] More specifically, most BitTorrent peers use a variant of tit for two tats called regular unchoking in BitTorrent terminology. BitTorrent peers have a limited number of upload slots to allocate to other peers. Consequently, when a peer's upload bandwidth is saturated, it will use a tit-for-tat strategy: cooperation is achieved when upload bandwidth is exchanged for download bandwidth. When another peer does not upload in return, the BitTorrent client chokes the connection with that uncooperative peer and reallocates the upload slot to a (hopefully) more cooperative peer. Regular unchoking corresponds closely to always cooperating on the first move in the prisoner's dilemma. Periodically, a peer will also allocate an upload slot to a randomly chosen uncooperative peer (an "optimistic unchoke"). This behavior allows the peer to search for more cooperative peers and gives a second chance to previously non-cooperating peers. The optimal threshold values of this strategy are still the subject of research.
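In outline, the selection logic described above might look like the sketch below. This is a simplification for illustration, not the actual BitTorrent implementation; the slot count, peer names, and rate table are assumed values:

```python
import random

def choose_unchoked(download_rates, regular_slots=3):
    # Reward the fastest uploaders (tit for tat on bandwidth), plus one
    # random optimistic unchoke to probe for new cooperative partners.
    ranked = sorted(download_rates, key=download_rates.get, reverse=True)
    unchoked = ranked[:regular_slots]           # regular (reciprocal) unchokes
    choked = ranked[regular_slots:]
    if choked:
        unchoked.append(random.choice(choked))  # optimistic unchoke
    return unchoked

rates = {"peer_a": 120, "peer_b": 95, "peer_c": 40, "peer_d": 0, "peer_e": 0}
print(choose_unchoked(rates))
```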

Explaining reciprocal altruism in animal communities


Studies in the prosocial behaviour of animals have led many ethologists and evolutionary psychologists to apply tit-for-tat strategies to explain why altruism evolves in many animal communities. Evolutionary game theory, derived from the mathematical theories formalised by von Neumann and Morgenstern (1953), was first devised by Maynard Smith (1972) and explored further in bird behaviour by Robert Hinde. Their application of game theory to the evolution of animal strategies launched an entirely new way of analysing animal behaviour.
Reciprocal altruism works in animal communities where the cost to the benefactor in any transaction of food, mating rights, nesting or territory is less than the gain to the beneficiary. The theory also holds that the act of altruism should be reciprocated if the balance of needs reverses. Mechanisms to identify and punish "cheaters" who fail to reciprocate, in effect a form of tit for tat, are an important means of regulating reciprocal altruism.

War
The tit for tat strategy has been detected by analysts in the spontaneous non-violent behaviour, called "live and let
live" that arose during trench warfare in the First World War. Troops dug in only a few hundred feet from each other
would evolve an unspoken understanding. If a sniper killed a soldier on one side, the other could expect an equal
retaliation. Conversely, if no one was killed for a time, the other side would acknowledge this implied "truce" and act
accordingly. This created a "separate peace" between the trenches.[7]
Popular culture
This approach to interactions can be seen as a parallel to the eye for an eye approach from Judeo-Christian-Islamic
tradition, where the penalty for taking someone's eye is to lose one's own.

References
[1] Shaun Hargreaves Heap, Yanis Varoufakis (2004). Game Theory: A Critical Text. Routledge. p. 191. ISBN 0415250943.
[2] Dawkins, Richard (1989). The Selfish Gene. Oxford University Press. ISBN 9780199291151.
[3] Gintis, Herbert (2000). Game Theory Evolving. Princeton University Press. ISBN 0691009430.
[4] Boyd, Robert (1989). "Mistakes Allow Evolutionary Stability in the Repeated Prisoner's Dilemma Game". Journal of Theoretical Biology 136 (1): 47–56. doi:10.1016/S0022-5193(89)80188-2. PMID 2779259.
[5] Axelrod, Robert (1984). The Evolution of Cooperation. Basic Books. ISBN 0465021212.
[6] Cohen, Bram (2003-05-22). "Incentives Build Robustness in BitTorrent" (http://www.bittorrent.org/bittorrentecon.pdf). BitTorrent.org. Retrieved 2011-02-05.
[7] Nice Guys Finish First. Richard Dawkins. BBC. 1986.

External links
• Wired magazine story about tit for tat being 'defeated' by a group of collaborating programs (http://www.wired.com/news/culture/0,1284,65317,00.html)
• Explanation of Tit for tat on Australian Broadcasting Corporation (http://www2.owen.vanderbilt.edu/mike.shor/Courses/GTheory/docs/Axelrod.html)
• Article on tit for tat and its success in evolutionary cooperation (http://journal.ilovephilosophy.com/Article/Can-cooperation-every-occur-without-the-state-/1130)

Grim trigger
In game theory, grim trigger (also called the grim strategy or just grim) is a trigger strategy for a repeated game,
such as an iterated prisoner's dilemma. Initially, a player using grim trigger will cooperate, but as soon as the
opponent defects (thus satisfying the trigger condition), the player using grim trigger will defect for the remainder of
the iterated game. Since a single defection by the opponent triggers defection forever, grim trigger is the most strictly unforgiving of strategies in an iterated game.
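The rule is almost trivial to state in code. A minimal sketch, assuming moves are recorded as the characters 'C' and 'D':

```python
def grim_trigger(opponent_history):
    # Cooperate until the opponent has defected even once; then defect forever.
    return "D" if "D" in opponent_history else "C"

print(grim_trigger(""))       # 'C': cooperate initially
print(grim_trigger("CCDCC"))  # 'D': one past defection triggers permanent defection
```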
In iterated prisoner's dilemma strategy competitions, grim trigger performs poorly even without noise, and adding
signal errors makes it even worse. Its ability to threaten permanent defection gives it a theoretically effective way to
sustain trust, but because of its unforgiving nature and the inability to communicate this threat in advance, it
performs poorly.[1]
In Robert Axelrod's book The Evolution of Cooperation, grim trigger is called "Friedman", for a 1971 paper by
James Friedman which uses the concept.[2]

References
[1] Axelrod, Robert (2000). "On Six Advances in Cooperation Theory" (http://www.fordschool.umich.edu/research/papers/PDFfiles/00-003.pdf). Retrieved 2007-11-02. (page 13)
[2] Friedman, James W. (1971). "A Non-cooperative Equilibrium for Supergames". Review of Economic Studies 38 (1): 1–12. doi:10.2307/2296617.
Collusion

Collusion is an agreement between two or more persons, sometimes illegal and therefore secretive, to limit open
competition by deceiving, misleading, or defrauding others of their legal rights, or to obtain an objective forbidden
by law typically by defrauding or gaining an unfair advantage. It is an agreement among firms to divide the market,
set prices, or limit production.[1] It can involve "wage fixing, kickbacks, or misrepresenting the independence of the
relationship between the colluding parties".[2] In legal terms, all acts effected by collusion are considered void.[3]
Definition
In the study of economics and market competition, collusion takes place within an industry when rival companies
cooperate for their mutual benefit. Collusion most often takes place within the market structure of oligopoly, where
the decision of a few firms to collude can significantly impact the market as a whole. Cartels are a special case of
explicit collusion. Collusion which is not overt, on the other hand, is known as tacit collusion.

Variations
According to neoclassical price-determination theory and game theory, the independence of suppliers forces prices to
their minimum, increasing efficiency and decreasing the price determining ability of each individual firm. However,
if firms collude to increase prices, loss of sales is minimized, as consumers lack alternative choices at lower prices.
This benefits the colluding firms at the cost of efficiency to society.
One variation of this traditional theory is the theory of kinked demand. Firms face a kinked demand curve if, when one firm decreases its price, other firms are expected to follow suit in order to maintain sales, while if one firm increases its price, its rivals are unlikely to follow, since they would lose the gains in sales they would otherwise capture by holding prices at the previous level. Kinked demand potentially fosters supra-competitive prices, because any one firm would receive a reduced benefit from cutting price, as opposed to the benefits accruing under neoclassical theory and certain game-theoretic models such as Bertrand competition.

Indicators
Practices that suggest collusion include:
• Uniform prices
• A penalty for price discounts
• Advance notice of price changes
• Information exchange

Examples
Collusion is largely illegal in the United States, Canada and most of the EU due to competition/antitrust law, but
implicit collusion in the form of price leadership and tacit understandings still takes place. Several examples of
collusion in the United States include:
• Market division and price-fixing among manufacturers of heavy electrical equipment in the 1960s, including
General Electric.[4]
• An attempt by Major League Baseball owners to restrict players' salaries in the mid-1980s.
• The sharing of potential contract terms by NBA free agents in an effort to help a targeted franchise circumvent the salary cap.
• Price fixing among food manufacturers providing cafeteria food to schools and the military in 1993.
• Market division and output determination of the livestock feed additive lysine by companies in the US, Japan and South Korea in 1996, Archer Daniels Midland being the most notable of these.[5]
• Chip dumping in poker or any other high-stakes card game.
There are many ways that implicit collusion tends to develop:
• The practice of stock analyst conference calls and meetings of industry participants almost necessarily results in
tremendous amounts of strategic and price transparency. This allows each firm to see how and why every other
firm is pricing their products.
• If the practice of the industry causes more complicated pricing, which is hard for the consumer to understand (such as risk-based pricing, hidden taxes and fees in the wireless industry, or negotiable pricing), competition based on price can become meaningless (because it would be too complicated to explain to the customer in a short advertisement). This causes industries to have essentially the same prices and to compete on advertising and image, something theoretically as damaging to consumers as ordinary price fixing.

Barriers
There can be significant barriers to collusion. In any given industry, these may include:
• The number of firms: As the number of firms in an industry increases, it is more difficult to successfully organize,
collude and communicate.
• Cost and demand differences between firms: If costs vary significantly between firms, it may be impossible to
establish a price at which to fix output.
• Cheating: There is considerable incentive to cheat on collusion agreements; although lowering prices might
trigger price wars, in the short term the defecting firm may gain considerably. This phenomenon is frequently
referred to as "chiseling".
• Potential entry: New firms may enter the industry, establishing a new baseline price and eliminating collusion
(though anti-dumping laws and tariffs can prevent foreign companies entering the market).
• Economic recession: An increase in average total cost or a decrease in revenue provides incentive to compete
with rival firms in order to secure a larger market share and increased demand.

References
[1] Sullivan, Arthur; Steven M. Sheffrin (2003). Economics: Principles in Action (http://www.pearsonschool.com/index.cfm?locator=PSZ3R9&PMDbSiteId=2781&PMDbSolutionId=6724&PMDbCategoryId=&PMDbProgramId=12881&level=4). Upper Saddle River, New Jersey: Pearson Prentice Hall. p. 171. ISBN 0-13-063085-3.
[2] Collusion Law & Legal Definition (http://definitions.uslegal.com/c/collusion/)
[3] Collusion (http://encarta.msn.com/encyclopedia_761571249/Collusion.html). Archived (http://www.webcitation.org/5kwRA5eiX) 2009-10-31.
[4] Encyclopedia of White-Collar & Corporate Crime (http://books.google.com/books?id=0f7yTNb_V3QC&pg=PA377&lpg=PA377&dq=market+division+collusion+heavy+electrical+equipment++1960).
[5] Hunter-Gault, Charlayne (October 15, 1996). "ADM: Who's Next?". MacNeil/Lehrer Newshour (PBS). http://www.pbs.org/newshour/bb/business/october96/adm_10-15.html. Retrieved 2007-10-17.

• Vives, X. (1999) Oligopoly pricing, MIT Press, Cambridge MA (readable; suitable for advanced undergraduates.)
• Tirole, J. (1988) The Theory of Industrial Organization, MIT Press, Cambridge MA (An organized introduction
to industrial organization)
• Tirole, J. (1986), "Hierarchies and Bureaucracies", Journal of Law Economics and Organization, vol. 2,
pp. 181–214.
• Tirole, J. (1992), "Collusion and the Theory of Organizations", Advances in Economic Theory: Proceedings of
the Sixth World Congress of the Econometric Society, ed by J.-J. Laffont. Cambridge: Cambridge University
Press, vol.2:151-206.
Backward induction
Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to
determine a sequence of optimal actions. It proceeds by first considering the last time a decision might be made and
choosing what to do in any situation at that time. Using this information, one can then determine what to do at the
second-to-last time of decision. This process continues backwards until one has determined the best action for every
possible situation (i.e. for every possible information set) at every point in time.
In the mathematical optimization method of dynamic programming, backward induction is one of the main methods
for solving the Bellman equation.[1] [2] In game theory, backward induction is a method used to compute subgame
perfect equilibria in sequential games.[3] The only difference is that optimization involves just one decision maker, who chooses what to do at each point in time, whereas game theory analyzes how the decisions of several players interact. That is, by anticipating what the last player will do in each situation, it is possible to determine what the
second-to-last player will do, and so on. In the related fields of automated planning and scheduling and automated
theorem proving, the method is called backward search or backward chaining. In chess it is called retrograde
analysis.
Backward induction has been used to solve games as long as the field of game theory has existed. John von
Neumann and Oskar Morgenstern suggested solving zero-sum, two-person games by backward induction in their
Theory of Games and Economic Behavior (1944), the book which established game theory as a field of study.[4] [5]

An example of decision-making by backward induction


Consider an unemployed person who will be able to work for ten more years t = 1,2,...,10. Suppose that each year in
which she remains unemployed, she may be offered a 'good' job that pays $100, or a 'bad' job that pays $44, with
equal probability (50/50). Once she accepts a job, she will remain in that job for the rest of the ten years. (Assume
for simplicity that she cares only about her monetary earnings, and that she values earnings at different times
equally, i.e., the discount rate is zero.)
Should this person accept bad jobs? To answer this question, we can reason backwards from time t = 10.
• At time 10, the value of accepting a good job is $100; the value of accepting a bad job is $44; the value of
rejecting the job that is available is zero. Therefore, if she is still unemployed in the last period, she should accept
whatever job she is offered at that time.
• At time 9, the value of accepting a good job is $200 (because that job will last for two years); the value of
accepting a bad job is 2*$44 = $88. The value of rejecting a job offer is $0 now, plus the value of waiting for the
next job offer, which will either be $44 with 50% probability or $100 with 50% probability, for an average
('expected') value of 0.5*($100+$44) = $72. Therefore regardless of whether the job available at time 9 is good or
bad, it is better to accept that offer than wait for a better one.
• At time 8, the value of accepting a good job is $300 (it will last for three years); the value of accepting a bad job
is 3*$44 = $132. The value of rejecting a job offer is $0 now, plus the value of waiting for a job offer at time 9.
Since we have already concluded that offers at time 9 should be accepted, the expected value of waiting for a job
offer at time 9 is 0.5*($200+$88) = $144. Therefore at time 8, it is more valuable to wait for the next offer than to
accept a bad job.
It can be verified by continuing to work backwards that bad offers should only be accepted if one is still unemployed
at times 9 or 10; they should be rejected at all times up to t = 8. The intuition is that if one expects to work in a job
for a long time, this makes it more valuable to be picky about what job to accept.
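The verification is easy to mechanize. The following sketch reproduces the backward-induction computation, using the wages, probabilities, ten-year horizon, and zero discount rate assumed in the example:

```python
T = 10
offers = [100, 44]  # 'good' and 'bad' jobs, each offered with probability 1/2

# continuation[t] = expected value of entering period t still unemployed
continuation = [0.0] * (T + 2)
accept_bad = {}
for t in range(T, 0, -1):
    values = []
    for wage in offers:
        accept = wage * (T - t + 1)      # hold the job for the remaining years
        reject = continuation[t + 1]     # wait for next period's offer
        values.append(max(accept, reject))
        if wage == 44:
            accept_bad[t] = accept >= reject
    continuation[t] = sum(values) / len(values)

print(accept_bad)  # True only at t = 9 and t = 10, as argued above
```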
A dynamic optimization problem of this kind is called an optimal stopping problem, because the issue at hand is
when to stop waiting for a better offer. Search theory is the field of microeconomics that applies problems of this
type to contexts like shopping, job search, and marriage.
An example of backward induction in game theory


Consider the ultimatum game, where one player proposes to split a dollar with another. The first player (the
proposer) suggests a division of the dollar between the two players. The second player is then given the option to
either accept the split or reject it. If the second player accepts, both get the amount suggested by the proposer. If
rejected, neither receives anything.
Consider the actions of the second player given any arbitrary proposal by the first player (that gives the second
player more than zero). Since the only choice the second player has at each of these points in the game is to choose
between something and nothing, one can expect that the second will accept. Given that the second will accept all
proposals offered by the first (that give the second anything at all), the first ought to propose giving the second as
little as possible. This is the unique subgame perfect equilibrium of the Ultimatum Game. (However, the Ultimatum
Game does have several other Nash equilibria which are not subgame perfect.)
See also centipede game.
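The argument can be checked mechanically once the dollar is discretized. A minimal sketch, assuming the dollar is split in whole cents and that the responder accepts any strictly positive offer:

```python
def responder_accepts(offer_to_responder):
    # Something is better than nothing.
    return offer_to_responder > 0

# The proposer offers the smallest amount the responder will accept.
best_offer = min(o for o in range(101) if responder_accepts(o))
print(best_offer)  # 1 cent: the proposer keeps 99 and the offer is accepted
```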

Backward induction and economic entry


Consider a dynamic game in which the players are an incumbent firm in an industry and a potential entrant to that
industry. As it stands, the incumbent has a monopoly over the industry and does not want to lose some of its market
share to the entrant. If the entrant chooses not to enter, the payoff to the incumbent is high (it maintains its
monopoly) and the entrant neither loses nor gains (its payoff is zero). If the entrant enters, the incumbent can "fight"
or "accommodate" the entrant. It will fight by lowering its price, running the entrant out of business (and incurring
exit costs — a negative payoff) and damaging its own profits. If it accommodates the entrant it will lose some of its
sales, but a high price will be maintained and it will receive greater profits than by lowering its price (but lower than
monopoly profits).
Say that the best response of the incumbent is to accommodate if the entrant enters. If the incumbent accommodates,
the best response of the entrant is to enter (and gain profit). Hence the strategy profile in which the incumbent
accommodates if the entrant enters and the entrant enters if the incumbent accommodates is a Nash equilibrium.
However, if the incumbent is going to play fight, the best response of the entrant is to not enter. If the entrant does
not enter, it does not matter what the incumbent chooses to do (since there is no other firm to do it to — note that if
the entrant does not enter, fight and accommodate yield the same payoffs to both players; the incumbent will not
lower its prices if the entrant does not enter). Hence fight is a best response of the incumbent if the entrant does not
enter. Hence the strategy profile in which the incumbent fights if the entrant does not enter and the entrant does not
enter if the incumbent fights is a Nash equilibrium. Since the game is dynamic, any claim by the incumbent that it
will fight is not a credible threat because by the time the decision node is reached where it can decide to fight (i.e. the
entrant has entered), it would be irrational to do so. Therefore this Nash equilibrium can be eliminated by backward
induction.
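The elimination argument can be made concrete with a small computation. In the sketch below the numeric payoffs are illustrative assumptions consistent with the ordering described above (monopoly is best for the incumbent, a price war is worst for both):

```python
# Payoffs as (incumbent, entrant); the numbers are invented for illustration.
payoffs = {
    ("out", None):            (10, 0),   # monopoly preserved
    ("enter", "fight"):       (2, -1),   # price war hurts both
    ("enter", "accommodate"): (5, 3),    # shared market at a high price
}

# Step 1 (last mover): the incumbent's best response once entry has occurred.
best_incumbent = max(["fight", "accommodate"],
                     key=lambda a: payoffs[("enter", a)][0])

# Step 2: the entrant anticipates that response when deciding whether to enter.
enter_value = payoffs[("enter", best_incumbent)][1]
best_entrant = "enter" if enter_value > payoffs[("out", None)][1] else "out"

print(best_entrant, best_incumbent)  # enter accommodate: "fight" is not credible
```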

A paradox of backward induction


The unexpected hanging paradox is a paradox related to backward induction. Suppose a prisoner is told that she will
be hanged sometime between Monday and Friday of next week. However, the exact day will be a surprise (i.e. she
will not know the night before that she will be executed the next day). The prisoner, interested in outsmarting her
executioner, attempts to determine which day the execution will occur.
She reasons that it cannot occur on Friday, since if it had not occurred by the end of Thursday, she would know the
execution would be on Friday. Therefore she can eliminate Friday as a possibility. With Friday eliminated, she decides that it cannot occur on Thursday, since if it had not occurred by the end of Wednesday, she would know that it had to be on Thursday. Therefore she can eliminate Thursday. This reasoning proceeds until she has eliminated all
possibilities. She concludes that she will not be hanged next week.
To her surprise, she is hanged on Wednesday.


Here the prisoner reasons by backward induction, but seems to come to a false conclusion. Note, however, that the
description of the problem assumes it is possible to surprise someone who is performing backward induction. The
mathematical theory of backward induction does not make this assumption, so the paradox does not call into
question the results of this theory. Nonetheless, this paradox has received some substantial discussion by
philosophers.

Notes
[1] Jerome Adda and Russell Cooper, "Dynamic Economics: Quantitative Methods and Applications", Section 3.2.1, page 28. MIT Press, 2003.
[2] Mario Miranda and Paul Fackler, "Applied Computational Economics and Finance", Section 7.3.1, page 164. MIT Press, 2002.
[3] Drew Fudenberg and Jean Tirole, "Game Theory", Section 3.5, page 92. MIT Press, 1991.
[4] John von Neumann and Oskar Morgenstern, "Theory of Games and Economic Behavior", Section 15.3.1. Princeton University Press. Third edition, 1953. (http://www.archive.org/details/theoryofgamesand030098mbp) (First edition, 1944.)
[5] Mathematics of Chess (http://www-groups.dcs.st-and.ac.uk/~history/Projects/MacQuarrie/Chapters/Ch4.html), webpage by John MacQuarrie.

Markov strategy
In game theory, a Markov strategy is one that does not depend on state variables that are functions of the history of the game, except those that directly affect payoffs.
In other words, a player following a Markov strategy conditions his action in each period t only on the state in that period, not on the path by which that state was reached. This lack of dependence on history is known in the theory of stochastic processes as the Markov property.
Game Classes

Symmetric game
In game theory, a symmetric game is a game where the payoffs for playing a particular strategy depend only on the
other strategies employed, not on who is playing them. If one can change the identities of the players without
changing the payoff to the strategies, then a game is symmetric. Symmetry can come in different varieties.
Ordinally symmetric games are games that are symmetric with respect to the ordinal structure of the payoffs. A game is quantitatively symmetric if and only if it is symmetric with respect to the exact payoffs.

Symmetry in 2x2 games

        E       F
E       a, a    b, c
F       c, b    d, d

Only 12 of the 144 ordinally distinct 2x2 games are symmetric. However, many of the commonly studied 2x2 games are at least ordinally symmetric. The standard representations of chicken, the Prisoner's Dilemma, and the Stag hunt are all symmetric games. Formally, in order for a 2x2 game to be symmetric, its payoff matrix must conform to the schema shown above.
The requirements for a game to be ordinally symmetric are weaker: it need only be the case that the ordinal ranking of the payoffs conforms to the schema.

Symmetry and equilibria


Nash (1951) shows that every symmetric game has a symmetric mixed strategy Nash equilibrium. Cheng et al.
(2004) show that every two-strategy symmetric game has a (not necessarily symmetric) pure strategy Nash
equilibrium.

Uncorrelated asymmetries: payoff neutral asymmetries


Symmetries here refer to symmetries in payoffs. Biologists often refer to asymmetries in payoffs between players in
a game as correlated asymmetries. These are in contrast to uncorrelated asymmetries which are purely informational
and have no effect on payoffs (e.g. see Hawk-dove game).
The general case


Dasgupta and Maskin consider games $G = (S_1, \ldots, S_n; u_1, \ldots, u_n)$, where $u_i$ is the payoff function for player $i$ and $S_i$ is player $i$'s strategy set. Then the game is defined to be symmetric if, for any permutation $\pi$ of the set of players,
$u_{\pi(i)}(s_{\pi(1)}, \ldots, s_{\pi(n)}) = u_i(s_1, \ldots, s_n)$.
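For two players this condition reduces to requiring that swapping the players' identities leaves payoffs unchanged, i.e. $u_1(s, t) = u_2(t, s)$. A minimal sketch of the check, using the prisoner's dilemma payoffs that appear earlier in this volume:

```python
from itertools import product

def is_symmetric(u1, u2, strategies):
    # u1(s, t) must equal u2(t, s) for every pair of strategies.
    return all(u1[(s, t)] == u2[(t, s)] for s, t in product(strategies, repeat=2))

u1 = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
u2 = {("C", "C"): 3, ("C", "D"): 5, ("D", "C"): 0, ("D", "D"): 1}
print(is_symmetric(u1, u2, ["C", "D"]))  # True
```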

References
• Shih-Fen Cheng, Daniel M. Reeves, Yevgeniy Vorobeychik and Michael P. Wellman. Notes on Equilibria in
Symmetric Games, International Joint Conference on Autonomous Agents & Multi Agent Systems, 6th
Workshop On Game Theoretic And Decision Theoretic Agents, New York City, NY, August 2004. [1]
• Symmetric Game [2] at Gametheory.net [3]
• P. Dasgupta and E. Maskin 1986. "The existence of equilibrium in discontinuous economic games, I: Theory".
The Review of Economic Studies, 53(1):1-26
• John Nash. "Non-cooperative Games". "The Annals of Mathematics", 2nd Ser., 54(2):286-295, September 1951.

Further reading
• David Robinson; David Goforth (2005). The topology of the 2x2 games: a new periodic table. Routledge.
ISBN 9780415336093.

References
[1] http://www.sci.brooklyn.cuny.edu/~parsons/events/gtdt/gtdt04/reeves.pdf
[2] http://www.gametheory.net/dictionary/Games/SymmetricGame.html
[3] http://www.gametheory.net
Perfect information
In game theory, perfect information describes the situation in which every player, whenever it is his turn to move, knows the complete history of play so far, that is, all moves previously made by all players, and can therefore determine all of the possible continuations of the game (all combinations of legal moves). A game is described as a game of perfect information if perfect information is available for all moves. Chess is an example of a game with perfect information, as each player can see all of the pieces on the board at all times. Other examples of games with perfect information include tic-tac-toe, irensei, and go. Games with perfect information represent a small subset of games. Card games where each player's cards are hidden from other players are examples of games of imperfect information.[1]
In microeconomics, a state of perfect information is assumed in some models of perfect competition: assuming that all agents are rational and perfectly informed, they will choose the best products, and the market will reward those who make the best products with higher sales. Perfect information would in practice mean that all consumers know all things about all products at all times (including the probabilistic outcomes of all future events), and therefore always make the best purchasing decision. In competitive markets, unlike game-theoretic models, perfect competition does not require that agents have complete knowledge about the actions of others; all relevant information is reflected in prices.

References
[1] Thomas, L. C. (2003). Games, Theory and Applications. Mineola N.Y.: Dover Publications. pp. 19. ISBN 0-486-43237-8.

Further reading
• Fudenberg, D. and Tirole, J. (1993) Game Theory, MIT Press. (see Chapter 3, sect 2.2)
• Gibbons, R. (1992) A primer in game theory, Harvester-Wheatsheaf. (see Chapter 2)
• Luce, R.D. and Raiffa, H. (1957) Games and Decisions: Introduction and Critical Survey, Wiley & Sons (see
Chapter 3, section 2)
• The Economics of Groundhog Day (http://www.mises.org/story/2289) by economist D.W. MacKenzie, using
the 1993 film Groundhog Day to argue that perfect information, and therefore perfect competition, is impossible.
Simultaneous game
In game theory, a simultaneous game is a game where each player chooses his action without knowledge of the actions chosen by the other players. Normal-form representations are usually used for simultaneous games.
The prisoner's dilemma is an example of a simultaneous game.

Sequential game
In game theory, a sequential game is a game where one player chooses his action before the others choose theirs. Importantly, the later players must have some information about the first player's choice; otherwise the difference in time would have no strategic effect. Extensive-form representations are usually used for sequential games, since they explicitly illustrate the sequential aspects of a game.
Combinatorial games are usually sequential games.
Sequential games are often solved by backward induction.

Repeated game
In game theory, a repeated game (also known as a supergame or iterated game) is an extensive-form game which consists of some number of repetitions of a base game (called a stage game). The stage game is usually one of the well-studied 2-person games. The model captures the idea that a player will have to take into account the impact of his current action on the future actions of other players; this impact is sometimes called his reputation. Repeated play produces equilibrium properties that differ from those of the stage game because the threat of retaliation is real, since one will play the game again with the same person. It can be proved that every feasible payoff profile that gives each player more than his minmax payoff can be sustained as a Nash equilibrium, which yields a very large set of equilibria. Single-stage game or single-shot game are names for non-repeated games.

Finitely vs infinitely repeated games


Repeated games may be broadly divided into two classes, depending on whether the horizon is finite or infinite. The results in these two cases are very different. Even games repeated a finite number of times are not necessarily finite-horizon games: a player may perceive some probability of another round and act accordingly. For example, the fact that everyone has a finite lifetime does not mean that all games should be modeled with a finite horizon. Also, players might act differently when the horizon is far away than when it is close, which can be thought of as a time-dependent modifier applied to the payoff. The difference in strategies for finite versus infinite horizon games is a hotly debated topic, and many game theorists hold differing views about it.

Infinitely repeated games


The most widely studied repeated games are games that are repeated a possibly infinite number of times. On many
occasions, it is found that the optimal method of playing a repeated game is not to repeatedly play a Nash strategy of
the constituent game (look at the Repeated prisoner's dilemma example), but to cooperate and play a socially
optimum strategy. This can be interpreted as a "social norm" and one essential part of infinitely repeated games is
punishing players who deviate from this cooperative strategy. The punishment may be something like playing a
strategy which leads to reduced payoff to both players for the rest of the game (called a trigger strategy). There are many theorems which deal with how to achieve and maintain a socially optimal equilibrium in repeated games; these results are collectively called "folk theorems". An important feature of a repeated game is the way in which a player's preferences over payoff streams may be modeled. There are many different ways in which a preference relation may be modeled in an infinitely repeated game; the main ones are:
• Discounting - the valuation of the game diminishes with time depending on the discount parameter $\delta$: a payoff stream $v_1, v_2, \ldots$ is valued as $\sum_{t=1}^{\infty} \delta^{t-1} v_t$.
• Limit of means - can be thought of as the average payoff over $T$ periods as $T$ approaches infinity: $\lim_{T\to\infty} \frac{1}{T} \sum_{t=1}^{T} v_t$.
• Overtaking - the sequence $v_1, v_2, \ldots$ is superior to the sequence $w_1, w_2, \ldots$ if $\liminf_{T\to\infty} \sum_{t=1}^{T} (v_t - w_t) > 0$.
Robert Aumann's Blackmailer Paradox appears to be a repeated game in which the ultimatum game is played many
times by the same players for high stakes.

Finitely repeated games


As explained earlier, finitely repeated games can be divided into two broad classes. In the first class, where the number of periods is fixed and known, it is optimal to play the Nash strategy of the stage game in the last period. When the Nash equilibrium payoff is equal to the minmax payoff, the player has no reason to stick to a socially optimal strategy and is free to play a selfish strategy throughout, since the punishment cannot affect him (being equal to the minmax payoff). This deviation to a selfish Nash equilibrium strategy is explained by the Chainstore paradox. The second class of finitely repeated games, those in which the final period is not commonly known, are usually treated as infinitely repeated games.

Repeated prisoner's dilemma


Although the Prisoner's dilemma has only one Nash equilibrium (everyone defects), cooperation can be sustained in the repeated Prisoner's dilemma if the discount factor is not too low, that is, if the players are interested enough in future outcomes of the game. Strategies known as trigger strategies comprise Nash equilibria of the repeated Prisoner's dilemma. However, the Prisoner's dilemma is a game in which the minmax value is equal to the Nash equilibrium payoff. This means that a player who knows the exact horizon may simply decide to switch to defection without fear of punishment.
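The threshold on the discount factor is easy to compute for a grim trigger strategy. The sketch below uses the prisoner's dilemma payoffs that appear earlier in this volume (temptation 5, reward 3, punishment 1); the variable names are illustrative:

```python
temptation, reward, punishment = 5, 3, 1

# Cooperating forever is worth reward / (1 - d); a one-shot deviation is worth
# temptation + d * punishment / (1 - d). Cooperation is sustainable when
# d >= (temptation - reward) / (temptation - punishment).
critical_delta = (temptation - reward) / (temptation - punishment)
print(critical_delta)  # 0.5: players must value the future at least this much
```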
An example of repeated prisoner's dilemma is the WWI trench warfare. Here, though initially it was best to cause as
much damage to the other party as possible, as time passed and the opposing parties got to 'know' each other, they
realised that causing as much damage as possible to the other by, e.g. artillery will only prompt a similar response:
e.g. blowing up the foodstock of the other (through bombardment) will only leave both battalions hungry. After
some time, the opposing battalions learned that it is sufficient to show what they are capable of, instead of actually
carrying out the act.

Solving repeated games


Complex repeated games can be solved using various techniques most of which rely heavily on linear algebra and
the concepts expressed in fictitious play.

References
• Fudenberg, Drew; Tirole, Jean (1991). Game Theory. Cambridge: MIT Press. ISBN 0262061414.
• Mailath, G. & Samuelson, L. (2006). Repeated games and reputations: long-run relationships. New York: Oxford
University Press. ISBN 0195300793.
• Osborne, Martin J.; Rubinstein, Ariel (1994). A Course in Game Theory. Cambridge: MIT Press.
ISBN 0262150417.
• Sorin, Sylvain (2002). A First Course on Zero-Sum Repeated Games. Berlin: Springer. ISBN 3540430288.
External links
• Game-Theoretic Solution to Poker Using Fictitious Play [1]
• Game Theory notes on Repeated games [2]

References
[1] http://www.dudziak.com/poker.php
[2] http://wiki.cc.gatech.edu/theory/index.php/Repeated_games

Signaling games
A signaling game is a dynamic, Bayesian game with two players, the sender (S) and the receiver (R). The sender has a certain type, t, which is given by nature. The sender observes his own type while the receiver does not know the type of the sender. Based on his knowledge of his own type, the sender chooses to send a message from a set of possible messages M = {m1, m2, m3, ..., mj}. The receiver observes the message but not the type of the sender. Then the receiver chooses an action from a set of feasible actions A = {a1, a2, a3, ..., ak}. The two players receive payoffs dependent on the sender's type, the message chosen by the sender and the action chosen by the receiver.[1] [2] A related game is a screening game, where rather than choosing an action based on a signal, the receiver gives the sender proposals based on the type of the sender, which the sender has some control over.

[Figure: an extensive-form representation of a signaling game]

Signaling games were introduced by In-Koo Cho and David M. Kreps in a 1987 article.[3]

Costly versus cost-free signaling


One of the major uses of signaling games both in economics and biology has been to determine under what
conditions honest signaling can be an equilibrium of the game. That is, under what conditions can we expect rational
people or animals subject to natural selection to reveal information about their types?
If both parties have coinciding interests, that is, they both prefer the same outcomes in all situations, then honesty is an equilibrium. (Although in most of these cases non-communicative equilibria exist as well.) However, if the parties' interests do not perfectly overlap, then the maintenance of informative signaling systems raises an important problem.
Consider a circumstance described by John Maynard Smith regarding transfer between related individuals. Suppose
a signaler can be either starving or just hungry, and she can signal that fact to another individual which has food.
Suppose that she would like more food regardless of her state, but that the individual with food only wants to give
her the food if she is starving. While both players have identical interests when the signaler is starving, they have
opposing interests when she is only hungry. When the signaler is hungry she has an incentive to lie about her need in
order to obtain the food. And if the signaler regularly lies, then the receiver should ignore the signal and do whatever
he thinks best.
Determining how signaling is stable in these situations has concerned both economists and biologists, and both have
independently suggested that signal cost might play a role. If sending one signal is costly, it might only be worth the
cost for the starving person to signal. The analysis of when costs are necessary to sustain honesty has been a
significant area of research in both these fields.
Perfect Bayesian equilibrium


The equilibrium concept that is relevant for signaling games is Perfect Bayesian equilibrium. Perfect Bayesian
equilibrium is a refinement of Bayesian Nash equilibrium, which is an extension of Nash equilibrium to games of
incomplete information. Perfect Bayesian equilibrium is the equilibrium concept relevant for dynamic games of
incomplete information.

Definition of perfect Bayesian equilibrium of the signaling game


A sender of type $t_i$ sends a message $m(t_i)$ in the space of probability distributions over $M$. ($m(t_i)$ represents the probabilities that type $t_i$ will take any of the messages in $M$.) The receiver, observing the message $m$, takes an action $a(m)$ in the space of probability distributions over $A$.

Requirement 1
The receiver must have a belief about which types can have sent message $m$. These beliefs can be described as a probability distribution $\mu(t_i \mid m)$, the probability that the sender has type $t_i$ if he chooses message $m$. The sum over all types $t_i$ of these probabilities has to be 1 conditional on any message $m$.

Requirement 2
The action the receiver chooses must maximize the expected utility of the receiver given his beliefs $\mu(t_i \mid m)$ about which type could have sent message $m$. This means that the sum
$\sum_{t_i} \mu(t_i \mid m) \, U_R(t_i, m, a)$
is maximized. The action that maximizes this sum is $a^*(m)$.

Requirement 3
For each type, , the sender may have, the sender chooses to send the message that maximizes the sender's
utility given the strategy chosen by the receiver, .

Requirement 4
For each message $m$ the sender can send, if there exists a type $t_i$ such that $m^*(t_i)$ assigns strictly positive probability to $m$ (i.e. for each message which is sent with positive probability), the belief the receiver has about the type of the sender if he observes message $m$ satisfies Bayes' rule:
$\mu(t_i \mid m) = \frac{p(t_i)\,\Pr(m \mid t_i)}{\sum_{t_j} p(t_j)\,\Pr(m \mid t_j)}$,
where $p(t_i)$ is the prior probability that the sender has type $t_i$ and $\Pr(m \mid t_i)$ is the probability that $m^*(t_i)$ assigns to message $m$.
The perfect Bayesian equilibria in such a game can be divided into three categories: pooling equilibria, semi-pooling (also called semi-separating) equilibria, and separating equilibria. A pooling equilibrium is an equilibrium in which senders with different types all choose the same message. A semi-pooling equilibrium is one in which some types of senders choose the same message while other types choose different messages. A separating equilibrium is one in which senders with different types always choose different messages. Therefore, if there are more types of senders than there are messages, the equilibrium can never be a separating equilibrium (though it may be a semi-separating equilibrium).
Applications of signaling games


Signaling games describe situations where one player has information the other player does not have. These
situations of asymmetric information are very common in economics and behavioral biology.

Philosophy
The first known use of signaling games occurs in David K. Lewis' Ph. D. dissertation (and later book) Convention.[4]
Replying to W.V.O. Quine,[5] [6] Lewis attempts to develop a theory of convention and meaning using signaling
games. In his most extreme comments, he suggests that understanding the equilibrium properties of the appropriate
signaling game captures all there is to know about meaning:
I have now described the character of a case of signaling without mentioning the meaning of the signals: that
two lanterns meant that the redcoats were coming by sea, or whatever. But nothing important seems to have
been left unsaid, so what has been said must somehow imply that the signals have their meanings.[7]
The use of signaling games has been continued in the philosophical literature. Others have used evolutionary models of signaling games to describe the emergence of language. Work on the emergence of language in simple signaling games includes models by Huttegger,[8] Grim, et al.,[9] Skyrms,[10] [11] and Zollman.[12] Harms,[13] [14] and
Huttegger,[15] have attempted to extend the study to include the distinction between normative and descriptive
language.

Economics
The first application of signaling games to economic problems was Michael Spence's model of job market
signaling.[16] Spence describes a game where workers have a certain ability (high or low) that the employer does not
know. The workers send a signal by their choice of education. The cost of the education is higher for a low ability
worker than for a high ability worker. The employers observe the workers' education but not their ability, and choose
to offer the worker a high or low wage. In this model it is assumed that the level of education does not cause the high
ability of the worker, but rather, only workers with high ability are able to attain a specific level of education without
it being more costly than their increase in wage. In other words, the benefits of education are only greater than the
costs for workers with a high level of ability, so only workers with a high ability will get an education.
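The single-crossing logic can be illustrated numerically. In the sketch below the wages and per-unit education costs are invented for illustration (they are not values from Spence's paper); the search finds an education level that only the high-ability worker is willing to acquire:

```python
w_high, w_low = 100, 60     # wage with / without the educational credential
cost_high, cost_low = 2, 6  # education cost per unit for high / low ability

def education_pays(cost_per_unit, education_level):
    return w_high - cost_per_unit * education_level > w_low

# A separating equilibrium requires a level only the high type will acquire.
for e in range(1, 25):
    if education_pays(cost_high, e) and not education_pays(cost_low, e):
        print("separating education level:", e)  # prints 7 with these numbers
        break
```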

Biology
Valuable advances have been made by applying signaling games to a number of biological questions. Most notably,
Alan Grafen's (1990) handicap model of mate attraction displays.[17] The antlers of stags, the elaborate plumage of
peacocks and birds of paradise, and the song of the nightingale are all such signals. Grafen’s analysis of biological
signaling is formally similar to the classic monograph on economic market signaling by Michael Spence.[18] More
recently, a series of papers by Getty[19] [20] [21] [22] shows that Grafen’s analysis, like that of Spence, is based on the
critical simplifying assumption that signalers trade off costs for benefits in an additive fashion, the way humans
invest money to increase income in the same currency. This assumption that costs and benefits trade off in an
additive fashion might be valid for some biological signaling systems, but is not valid for multiplicative tradeoffs,
such as the survival cost – reproduction benefit tradeoff that is assumed to mediate the evolution of sexually selected
signals.
Charles Godfray (1991) modeled the begging behavior of nestling birds as a signaling game.[23] The nestlings' begging not only informs the parents that the nestlings are hungry, but also attracts predators to the nest. The parents and nestlings are in conflict: the nestlings benefit if the parents work harder to feed them than is ultimately optimal for the parents, who must trade off investment in the current nestlings against investment in future offspring.
Pursuit deterrent signals have been modeled as signaling games.[24] Thompson's gazelles are known sometimes to
perform a 'stott', a jump into the air of several feet with the white tail showing, when they detect a predator. Alcock
and others have suggested that this action is a signal of the gazelle's speed to the predator. This action successfully
distinguishes types because it would be impossible or too costly for a sick creature to perform and hence the predator
is deterred from chasing a stotting gazelle because it is obviously very agile and would prove hard to catch.

References
[1] Gibbons, Robert (1992). A Primer in Game Theory. New York: Harvester Wheatsheaf. ISBN 0745011594.
[2] Osborne, M. J. & Rubenstein, A. (1994). A Course in Game Theory. Cambridge: MIT Press. ISBN 0262650401.
[3] "Cho, I-K. & Kreps, D. M. (1987) Signaling games and stable equilibria. Quarterly Journal of Economics 102:179-221."
[4] Lewis, D. (1969). Convention. A Philosophical Study. Cambridge: Harvard University Press.
[5] Quine, W. V. O. (1936). "Truth by Convention". Philosophical Essays for Alfred North Whitehead. London: Longmans, Green & Co..
pp. 90–124. ISBN 0846209705. (Reprinting)
[6] Quine, W. V. O. (1960). "Carnap and Logical Truth". Synthese 12 (4): 350–374. doi:10.1007/BF00485423.
[7] Lewis (1969), p. 124.
[8] Huttegger, S. M. (2007). "Evolution and the Explanation of Meaning". Philosophy of Science 74 (1): 1–24. doi:10.1086/519477.
[9] Grim, P.; Kokalis, T.; Kilb, N.; St. Denis, Paul (2001). "Making Meaning Happen". Technical Report #01-02. Stony Brook: Group for Logic
and Formal Semantics SUNY, Stony Brook.
[10] Skyrms, B. (1996). Evolution of the Social Contract. Cambridge: Cambridge University Press. ISBN 0521554713.
[11] Skyrms, B. (2000). "Stability and Explanatory Significance of Some Simple Evolutionary Models". Philosophy of Science 67 (1): 94–113.
JSTOR 188615.
[12] Zollman, K. J. S. (2005). "Talking to Neighbors: The Evolution of Regional Meaning". Philosophy of Science 72 (1): 69–85.
doi:10.1086/428390.
[13] Harms, W. F. (2000). "Adaption and Moral Realism". Biology and Philosophy 15 (5): 699–712. doi:10.1023/A:1006661726993.
[14] Harms, W. F. (2004). Information and Meaning in Evolutionary Processes. Cambridge: Cambridge University Press. ISBN 0521815142.
[15] Huttegger, S. M. (2005). "Evolutionary Explanations of Indicatives and Imperatives". Erkenntnis 66 (3): 409–436.
doi:10.1007/s10670-006-9022-1.
[16] Spence, A. M. (1973). "Job Market Signaling". Quarterly Journal of Economics 87 (3): 355–374. doi:10.2307/1882010.
[17] Grafen, A. (1990). "Biological signals as handicaps". Journal of Theoretical Biology 144 (4): 517–546.
doi:10.1016/S0022-5193(05)80088-8. PMID 2402153.
[18] Spence, A. M. (1974). Market Signaling: Information Transfer in Hiring and Related Processes. Cambridge: Harvard University Press.
[19] Getty, T. (1998). "Handicap signalling: when fecundity and viability do not add up". Animal Behaviour 56 (1): 127–130.
doi:10.1006/anbe.1998.0744.
[20] Getty, T. (1998). "Reliable signalling need not be a handicap". Animal Behaviour 56: 253–255. doi:10.1006/anbe.1998.0748.
[21] Getty, T. (2002). "Signaling health versus parasites". The American Naturalist 159 (4): 363–371. doi:10.1086/338992.
[22] Getty, T. (2006). "Sexually selected signals are not similar to sports handicaps". Trends in Ecology & Evolution 21 (2): 83–88.
doi:10.1016/j.tree.2005.10.016.
[23] Godfray, H. C. J. (1991). "Signalling of need by offspring to their parents". Nature 352 (6333): 328–330. doi:10.1038/352328a0.
[24] Yachi, S. (1995). "How can honest signalling evolve? The role of the handicap principle". Proceedings of the Royal Society of London B 262
(1365): 283–288. doi:10.1098/rspb.1995.0207.
Cheap talk
In game theory, cheap talk is communication between players which does not directly affect the payoffs of the
game. This is in contrast to signaling in which sending certain messages may be costly for the sender depending on
the state of the world. The classic example is of an expert (say, ecological) trying to explain the state of the world to
an uninformed decision maker (say, politician voting on a deforestation bill). The decision maker, after hearing the
report from the expert, must then make a decision which affects the payoffs of both players.

Application
Cheap talk can, in general, be added to any game and has the potential to enhance the set of possible equilibrium
payoffs. For example, one can add a round of cheap talk in the beginning of the Battle of the Sexes. Each player
announces whether they intend to go to the football game, or the opera. Because the Battle of the Sexes is a
coordination game, this initial round of communication may enable the players to randomize among equilibria, yielding expected payoffs of [2.5, 2.5], which cannot be achieved with any pure or mixed strategy without cheap talk. The messages and strategies which yield this outcome are symmetric for each player. They are: 1) announce opera or football with even probability; 2) if a person announces opera (or football), then upon hearing this message the other person will say opera (or football) as well (Farrell and Rabin, 1996). If they both announce different options, then no coordination is achieved.
It is not guaranteed, however, that cheap talk will have an effect on equilibrium payoffs. Another game, the
Prisoner's Dilemma, is a game whose only equilibrium is in dominant strategies. Any pre-play cheap talk will be
ignored and players will play their dominant strategies (Defect,Defect) regardless of the messages sent.

Biological applications
It has commonly been argued that cheap talk will have no effect on the underlying structure of the game. In biology, authors have often argued that costly signalling best explains signalling between animals (see Handicap principle, Signalling theory). This general belief has been challenged, however (see work by Carl Bergstrom[1] and Brian Skyrms 2002, 2004). In particular, several models using evolutionary game theory indicate that cheap talk can affect the evolutionary dynamics of particular games.

Notes
[1] The Biology of Information. (http://octavia.zoology.washington.edu/information_overview.html)

References
• Crawford, V. P.; Sobel, J. (1982). "Strategic Information Transmission". Econometrica 50 (6): 1431–1451.
doi:10.2307/1913390.
• Farrell, J.; Rabin, M. (1996). "Cheap Talk". Journal of Economic Perspectives 10 (3): 103–118. JSTOR 2138522.
• Robson, A. J. (1990). "Efficiency in Evolutionary Games: Darwin, Nash, and the Secret Handshake". Journal of
Theoretical Biology 144 (3): 379–396. doi:10.1016/S0022-5193(05)80082-7.
• Skyrms, B. (2002). "Signals, Evolution and the Explanatory Power of Transient Information". Philosophy of
Science 69 (3): 407–428. doi:10.1086/342451.
• Skyrms, B. (2004). The Stag Hunt and the Evolution of Social Structure. New York: Cambridge University Press.
ISBN 0521826519.
Zero-sum
In game theory and economic theory, a zero-sum game is a mathematical representation of a situation in which a
participant's gain (or loss) of utility is exactly balanced by the losses (or gains) of the utility of other participant(s). If
the total gains of the participants are added up, and the total losses are subtracted, they will sum to zero. Thus cutting
a cake, where taking a larger piece reduces the amount of cake available for others, is a zero-sum game if all
participants value each unit of cake equally (see marginal utility). In contrast, non-zero-sum describes a situation in
which the interacting parties' aggregate gains and losses are either less than or more than zero. A zero-sum game is
also called a strictly competitive game. Zero-sum games are most often solved with the minimax theorem which is
closely related to linear programming duality,[1] or with Nash equilibrium.

Definition
The zero-sum property (if one gains, another loses) means that any result of a zero-sum situation is Pareto optimal
(generally, any game where all strategies are Pareto optimal is called a conflict game).[2]
Situations where participants can all gain or suffer together are referred to as non-zero-sum. Thus, a country with an
excess of bananas trading with another country for their excess of apples, where both benefit from the transaction, is
in a non-zero-sum situation. Other non-zero-sum games are games in which the sum of gains and losses by the players is sometimes more or less than what they began with.

Solution
For 2-player finite zero-sum games, the different game theoretic solution concepts of Nash equilibrium, minimax,
and maximin all give the same solution. In the solution, players play a mixed strategy.

Example

A zero-sum game

            A          B          C
 1       30, -30    -10, 10    20, -20
 2       10, -10    20, -20   -20, 20

A game's payoff matrix is a convenient representation. Consider for example the two-player zero-sum game pictured
at right.
The order of play proceeds as follows: The first player (red) chooses in secret one of the two actions 1 or 2; the
second player (blue), unaware of the first player's choice, chooses in secret one of the three actions A, B or C. Then,
the choices are revealed and each player's points total is affected according to the payoff for those choices.
Example: Red chooses action 2 and Blue chooses action B. When the payoff is allocated, Red gains 20 points and
Blue loses 20 points.
Now, in this example game both players know the payoff matrix and attempt to maximize the number of their points.
What should they do?
Red could reason as follows: "With action 2, I could lose up to 20 points and can win only 20, while with action 1 I
can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, Blue would choose
action C. If both players take these actions, Red will win 20 points. But what happens if Blue anticipates Red's
reasoning and choice of action 1, and goes for action B, so as to win 10 points? Or if Red in turn anticipates this
devious trick and goes for action 2, so as to win 20 points after all?

Émile Borel and John von Neumann had the fundamental and surprising insight that probability provides a way out
of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their
respective actions, and then use a random device which, according to these probabilities, chooses an action for them.
Each player computes the probabilities so as to minimise the maximum expected point-loss independent of the
opponent's strategy. This leads to a linear programming problem with the optimal strategies for each player. This
minimax method can compute provably optimal strategies for all two-player zero-sum games.
For the example given above, it turns out that Red should choose action 1 with probability 4/7 and action 2 with
probability 3/7, while Blue should assign the probabilities 0, 4/7, and 3/7 to the three actions A, B, and C. Red will
then win 20/7 points on average per game.
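As a worked check (this calculation is mine, not part of the original article), Red's mixture earns the same expected amount against either action in Blue's support, and strictly more against Blue's unused action A:
\[
\text{vs. } B:\ \tfrac{4}{7}(-10) + \tfrac{3}{7}(20) = \tfrac{20}{7},
\qquad
\text{vs. } C:\ \tfrac{4}{7}(20) + \tfrac{3}{7}(-20) = \tfrac{20}{7},
\qquad
\text{vs. } A:\ \tfrac{4}{7}(30) + \tfrac{3}{7}(10) = \tfrac{150}{7}.
\]
Since 150/7 > 20/7, Blue places no weight on A, and Blue's own mixture over B and C holds Red to exactly 20/7 whichever row Red plays.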

Solving
The Nash equilibrium for a two-player, zero-sum game can be found by solving a linear programming problem.
Suppose a zero-sum game has a payoff matrix M where element M_ij is the payoff obtained when the minimizing player chooses pure strategy i and the maximizing player chooses pure strategy j (i.e. the player trying to minimize the payoff chooses the row and the player trying to maximize the payoff chooses the column). Assume every element of M is positive. The game will have at least one Nash equilibrium. The Nash equilibrium can be found (see ref. [2], page 740) by solving the following linear program to find a vector u:
Minimize:
 Σ_i u_i
Subject to the constraints:
 u ≥ 0
 M u ≥ 1.
The first constraint says each element of the u vector must be nonnegative, and the second constraint says each element of the M u vector must be at least 1. For the resulting u vector, the inverse of the sum of its elements is the value of the game. Multiplying u by that value gives a probability vector, giving the probability that the maximizing player will choose each of the possible pure strategies.
If the game matrix does not have all positive elements, simply add a constant to every element that is large enough to
make them all positive. That will increase the value of the game by that constant, and will have no effect on the
equilibrium mixed strategies for the equilibrium.
The equilibrium mixed strategy for the minimizing player can be found by solving the dual of the given linear program. Or, it can be found by using the above procedure to solve a modified payoff matrix which is the transpose and negation of M (adding a constant so it is positive), then solving the resulting game.
If all the solutions to the linear program are found, they will constitute all the Nash equilibria for the game.
Conversely, any linear program can be converted into a two-player, zero-sum game by using a change of variables
that puts it in the form of the above equations. So such games are equivalent to linear programs, in general.
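The following sketch (my illustration, not part of the original article; the function name and the use of scipy are my own choices) carries out the procedure described above with an off-the-shelf LP solver and recovers the strategies and value 20/7 of the example game.

    import numpy as np
    from scipy.optimize import linprog

    def solve_zero_sum(M):
        # Row player minimizes, column player maximizes; returns the value of
        # the game and the maximizing player's equilibrium mixed strategy.
        M = np.asarray(M, dtype=float)
        shift = 0.0
        if M.min() <= 0:
            shift = 1.0 - M.min()      # add a constant so every element is positive
            M = M + shift
        m, n = M.shape
        # Minimize sum(u) subject to M u >= 1, u >= 0 (linprog uses <=, so negate).
        res = linprog(c=np.ones(n), A_ub=-M, b_ub=-np.ones(m),
                      bounds=[(0, None)] * n)
        value = 1.0 / res.x.sum()      # value of the shifted game
        strategy = res.x * value       # rescale u into a probability vector
        return value - shift, strategy # undo the constant shift

    # The example above: Red (the row chooser there) is the maximizer, so pass
    # the transpose, which makes Blue the row (minimizing) player.
    red_payoffs = np.array([[30, -10, 20],
                            [10,  20, -20]])
    value, red_strategy = solve_zero_sum(red_payoffs.T)
    print(value, red_strategy)         # ~2.857 (= 20/7) and [4/7, 3/7]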

Non-zero-sum

Economics
Many economic situations are not zero-sum, since valuable goods and services can be created, destroyed, or badly
allocated, and any of these will create a net gain or loss. Assuming the counterparties are acting rationally with
symmetric information, any commercial exchange is a non-zero-sum activity, because each party must consider the
goods it is receiving as being at least fractionally more valuable than the goods it is delivering. Economic exchanges
must benefit both parties enough above the zero-sum such that each party can overcome its transaction costs.

See also:
• Comparative advantage
• Zero-sum fallacy
• Gains from trade
• Free trade

Psychology
The most common or simple example from the subfield of Social Psychology is the concept of "Social Traps". In
some cases we can enhance our collective well-being by pursuing our personal interests — or parties can pursue
mutually destructive behavior as they choose their own ends.

Complexity
It has been theorized by Robert Wright in his book Nonzero: The Logic of Human Destiny, that society becomes
increasingly non-zero-sum as it becomes more complex, specialized, and interdependent. As former US President
Bill Clinton states:
The more complex societies get and the more complex the networks of interdependence within and beyond
community and national borders get, the more people are forced in their own interests to find non-zero-sum
solutions. That is, win–win solutions instead of win–lose solutions.... Because we find as our interdependence
increases that, on the whole, we do better when other people do better as well — so we have to find ways that
we can all win, we have to accommodate each other....
—Bill Clinton, Wired interview, December 2000.[3]

Extensions
In 1944 John von Neumann and Oskar Morgenstern proved that any zero-sum game involving n players is in fact a
generalized form of a zero-sum game for two players, and that any non-zero-sum game for n players can be reduced
to a zero-sum game for n + 1 players; the (n + 1) player representing the global profit or loss.[4]

Misunderstandings
Zero–sum games and particularly their solutions are commonly misunderstood by critics of game theory, usually
with respect to the independence and rationality of the players, as well as to the interpretation of utility functions.
Furthermore, the word "game" does not imply the model is valid only for recreational games.[1]

References
[1] Ken Binmore (2007). Playing for real: a text on game theory (http://books.google.com/books?id=eY0YhSk9ujsC). Oxford University Press US. ISBN 9780195300574, chapters 1 & 7
[2] Bowles, Samuel (2004). Microeconomics: Behavior, Institutions, and Evolution. Princeton University Press. pp. 33–36. ISBN 0-691-09163-3.
[3] "Wired 8.12: Bill Clinton" (http://www.wired.com/wired/archive/8.12/clinton.html). Wired.com. 2009-01-04. Retrieved 2010-06-17.
[4] "Theory of Games and Economic Behavior" (http://www.archive.org/stream/theoryofgamesand030098mbp#page/n70/mode/1up/search/reduce). Princeton University Press (1953). (Digital publication date) 2005-06-25. Retrieved 2010-11-11.

Further reading
• "Misstating the Concept of Zero-Sum Games within the Context of Professional Sports Trading Strategies".
Created by Tony Kornheiser and Michael Wilbon. Performance by William Simmons. Pardon the Interruption.
ESPN. 2010-09-23.

• Raghavan, T. E. S. (1994). "Zero-sum two-person games". In Aumann; Hart. Handbook of Game Theory. 2.
Amsterdam: Elsevier. pp. 735–759. ISBN 0-444-89427-6.

External links
• Play zero-sum games online (http://www.egwald.ca/operationsresearch/twoperson.php) by Elmer G. Wiens.
• Game Theory & its Applications (http://www.le.ac.uk/psychology/amc/gtaia.html) - comprehensive text on
psychology and game theory. (Contents and Preface to Second Edition.)

Mechanism design
Mechanism design (sometimes called reverse game theory[1]) is a field in game theory studying solution concepts for a class of private information games. The distinguishing features of these games are:
• that a game "designer" chooses the game structure rather than inheriting one
• that the designer is interested in the game's outcome
Such a game is called a "game of mechanism design" and is usually solved by motivating agents to disclose their private information. The 2007 Nobel Memorial Prize in Economic Sciences was awarded to Leonid Hurwicz, Eric Maskin, and Roger Myerson "for having laid the foundations of mechanism design theory".[2]
[Figure: The Stanley Reiter diagram illustrates a game of mechanism design. The upper-left space Θ depicts the type space and the upper-right space X the space of outcomes. The social choice function f maps a type profile to an outcome. In games of mechanism design, agents send messages M in a game environment g. The equilibrium ξ(M, g, θ) in the game can be designed to implement some social choice function f(θ).]

Intuition
In an interesting class of Bayesian games, one player, called the “principal,” would like to condition his behavior on
information privately known to other players. For example, the principal would like to know the true quality of a
used car a salesman is pitching. He cannot learn anything simply by asking the salesman, because it is in the salesman's interest to distort the truth. Fortunately, in mechanism design the principal does have one advantage. He may design a game
whose rules can influence others to act the way he would like.
Absent mechanism design theory the principal's problem would be difficult to solve. He would have to consider all
the possible games and choose the one that best influences other players' tactics. In addition the principal would have
to draw conclusions from agents who may lie to him. Thanks to mechanism design, and particularly the revelation
principle, the principal need only consider games in which agents truthfully report their private information.

Foundations

Mechanism
A game of mechanism design is a game of private information in which one of the agents, called the principal,
chooses the payoff structure. Following Harsanyi (1967), the agents receive secret "messages" from nature
containing information relevant to payoffs. For example, a message may contain information about their preferences
or the quality of a good for sale. We call this information the agent's "type" (usually noted θ and, accordingly, the space of types Θ). Agents then report a type to the principal (usually noted with a hat, θ̂) that can be a strategic lie. After the report, the principal and the agents are paid according to the payoff structure the principal chose.
The timing of the game is:
1. The principal commits to a mechanism y(θ̂) that grants an outcome y as a function of the reported type
2. The agents report, possibly dishonestly, a type profile θ̂
3. The mechanism is executed (agents receive the outcome y(θ̂))
In order to understand who gets what, it is common to divide the outcome y into a goods allocation and a money transfer, y(θ) = (x(θ), t(θ)), where x stands for an allocation of goods rendered or received as a function of type, and t stands for a monetary transfer as a function of type.
As a benchmark the designer often defines what would happen under full information. Define a social choice function f(θ) mapping the (true) type profile directly to the allocation of goods received or rendered,
 f(θ) : Θ → X.
In contrast a mechanism maps the reported type profile θ̂ to an outcome (again, both a goods allocation x and a money transfer t),
 y(θ̂) : Θ → Y.
Revelation principle
A proposed mechanism constitutes a Bayesian game (a game of private information), and if it is well-behaved the
game has a Bayesian Nash equilibrium. At equilibrium agents choose their reports strategically as a function of type, θ̂(θ).
It is difficult to solve for Bayesian equilibria in such a setting because it involves solving for agents' best-response
strategies and for the best inference from a possible strategic lie. Thanks to a sweeping result called the revelation
principle, no matter the mechanism a designer can[3] confine attention to equilibria in which agents truthfully report
type. The revelation principle states: "For any Bayesian Nash equilibrium there corresponds a Bayesian game with
the same equilibrium outcome but in which players truthfully report type."
This is extremely useful. The principle allows one to solve for a Bayesian equilibrium by assuming all players
truthfully report type (subject to an incentive compatibility constraint). In one blow it eliminates the need to consider
either strategic behavior or lying.
Its proof is quite direct. Assume a Bayesian game in which the agent's strategy and payoff are functions of its type and what others do, u_i(s_i(θ_i), s_{−i}(θ_{−i}), θ_i). By definition agent i's equilibrium strategy s_i(θ_i) is Nash in expected utility:
 s_i(θ_i) ∈ argmax over s'_i in S_i of Σ_{θ_{−i}} p(θ_{−i} | θ_i) u_i(s'_i, s_{−i}(θ_{−i}), θ_i).
Simply define a mechanism that would induce agents to choose the same equilibrium. The easiest one to define is for the mechanism to commit to playing the agents' equilibrium strategies for them.

Under such a mechanism the agents of course find it optimal to reveal type, since the mechanism plays the strategies they found optimal anyway. Formally, choose y(θ̂) such that
 y(θ̂) = ( x(s(θ̂)), t(s(θ̂)) ).
Implementability
The designer of a mechanism generally hopes either
• to design a mechanism that "implements" a social choice function
• to find the mechanism that maximizes some value criterion (e.g. profit)
To implement a social choice function f(θ) is to find some transfer function t(θ) that motivates agents to pick it. Formally, if the equilibrium strategy profile under the mechanism maps to the same goods allocation as the social choice function,
 x(s(θ)) = f(θ),
we say the mechanism implements the social choice function.

Thanks to the revelation principle, the designer can usually find a transfer function t(θ) to implement a social choice by solving an associated truthtelling game. If agents find it optimal to truthfully report type,
 θ̂(θ) = θ,
we say such a mechanism is truthfully implementable (or just "implementable"). The task is then to solve for a truthfully implementable t(θ) and impute this transfer function to the original game. An allocation x(θ) is truthfully implementable if there exists a transfer function t(θ) such that
 u(x(θ), t(θ), θ) ≥ u(x(θ̂), t(θ̂), θ) for all θ̂, θ,
which is also called the incentive compatibility (IC) constraint.


In applications, the IC condition is the key to describing the shape of t(θ) in any useful way. Under certain conditions it can even isolate the transfer function analytically! Additionally, a participation (individual rationality) constraint is sometimes added if agents have the option of not playing.

Necessity
Consider a setting in which all agents have a type-contingent utility function u(x, t, θ). Consider also a goods allocation x(θ) that is vector-valued and size k (which permits k number of goods) and assume it is piecewise continuous with respect to its arguments.
The function x(θ) is implementable only if
 (∂/∂θ)( ∂u/∂x_k / ∂u/∂t ) · (∂x/∂θ) ≥ 0
whenever x = x(θ) and t = t(θ) and x is continuous at θ. This is a necessary condition and is derived from the first- and second-order conditions of the agent's optimization problem assuming truth-telling.
Its meaning can be understood in two pieces. The first piece says the agent's marginal rate of substitution (MRS) increases as a function of the type,
 (∂/∂θ)( ∂u/∂x_k / ∂u/∂t ) = ∂MRS/∂θ.
In short, agents will not tell the truth if the mechanism does not offer higher agent types a better deal. Otherwise,
higher types facing any mechanism that punishes high types for reporting will lie and declare they are lower types,

violating the truthtelling IC constraint. The second piece is a monotonicity condition waiting to happen,
 ∂x/∂θ ≥ 0,
which, for the product above to be nonnegative, means higher types must be given more of the good.
There is potential for the two pieces to interact. If for some type range the contract offered less quantity to higher types (a negative ∂x/∂θ), it is possible the mechanism could compensate by giving higher types a discount. But such a contract already exists for low-type agents, so this solution is pathological. Such a solution sometimes occurs in the
process of solving for a mechanism. In these cases it must be "ironed." In a multiple-good environment it is also
possible for the designer to reward the agent with more of one good to substitute for less of another (e.g. butter for
margarine). Multiple-good mechanisms are an ongoing problem in mechanism design theory.

Sufficiency
Mechanism design papers usually make two assumptions to ensure implementability. The first is
 (∂/∂θ)( ∂u/∂x_k / ∂u/∂t ) > 0 for all k.
This is known by several names: the single-crossing condition, the sorting condition and the Spence-Mirrlees condition. It means the utility function is of such a shape that the agent's MRS is increasing in type. The second is a technical condition bounding the rate of growth of the MRS.

These assumptions are sufficient to provide that any monotonic x(θ) is implementable (a t(θ) exists that can implement it). In addition, in the single-good setting the single-crossing condition is sufficient to provide that only a monotonic x(θ) is implementable, so the designer can confine his search to a monotonic x(θ).

Highlighted results

Revenue equivalence theorem


Vickrey (1961) gives a celebrated result that any member of a large class of auctions assures the seller of the same
expected revenue and that the expected revenue is the best the seller can do. This is the case if
1. The buyers have identical valuation functions (which may be a function of type)
2. The buyers' types are independently distributed
3. The buyers' types are drawn from a continuous distribution
4. The type distribution bears the monotone hazard rate property
5. The mechanism sells the good to the buyer with the highest valuation
The last condition is crucial to the theorem. An implication is that for the seller to achieve higher revenue he must
take a chance on giving the item to an agent with a lower valuation. Usually this means he must risk not selling the
item at all.

Vickrey–Clarke–Groves mechanisms
The Vickrey (1961) auction model was later expanded by Clarke (1971) and Groves (1973) to treat a public choice
problem in which a public project's cost is borne by all agents, e.g. whether to build a municipal bridge. The
resulting "Vickrey–Clarke–Groves" mechanism can motivate agents to choose the socially efficient allocation of the
public good even if agents have privately known valuations. In other words, it can solve the "tragedy of the
commons"—under certain conditions, in particular quasilinear utility or if budget balance is not required.

Consider a setting in which I number of agents have quasilinear utility with private valuations v(x, t, θ), where the currency t is valued linearly. The VCG designer designs an incentive compatible (hence truthfully implementable) mechanism to obtain the true type profile, from which the designer implements the socially optimal allocation
 x*(θ) ∈ argmax over x of Σ_i v(x, θ_i).
The cleverness of the VCG mechanism is the way it motivates truthful revelation. It eliminates incentives to misreport by penalizing any agent by the cost of the distortion he causes. Among the reports the agent may make, the VCG mechanism permits a "null" report saying he is indifferent to the public good and cares only about the money transfer. This effectively removes the agent from the game. If an agent does choose to report a type, the VCG mechanism charges the agent a fee if his report is pivotal, that is if his report changes the optimal allocation x so as to harm other agents. The payment is calculated
 t_i(θ̂) = Σ_{j≠i} v_j(x*(θ̂_{−i}), θ_j) − Σ_{j≠i} v_j(x*(θ̂), θ_j),
which sums the distortion in the utilities of the other agents (and not his own) caused by one agent's report.
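In the simplest case of a single indivisible item, this pivot payment reduces to the familiar second-price rule. The sketch below (my illustration, not from the article; the function name and numbers are made up) computes the winner and the VCG payment directly from the definition above.

    def vcg_single_item(bids):
        # bids[i]: agent i's reported valuation for a single indivisible item.
        winner = max(range(len(bids)), key=lambda i: bids[i])
        # Welfare of the other agents had the winner been absent:
        others_best_without = max(b for i, b in enumerate(bids) if i != winner)
        # Welfare of the other agents under the chosen allocation: they get
        # nothing, since only the winner consumes the item.
        others_welfare_with = 0.0
        return winner, others_best_without - others_welfare_with

    print(vcg_single_item([3.0, 7.0, 5.0]))   # (1, 5.0): the second-price rule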

Gibbard-Satterthwaite theorem
Gibbard (1973) and Satterthwaite (1975) give an impossibility result similar in spirit to Arrow's impossibility
theorem. For a very general class of games, only "dictatorial" social choice functions can be implemented.
A social choice function f(·) is dictatorial if some agent i always receives his most-favored goods allocation,
 f(θ) ∈ argmax over x in X of u_i(x, θ_i) for all θ in Θ.
The theorem states that under the following general conditions any truthfully implementable social choice function must be dictatorial:
1. X is finite and contains at least three elements
2. Preferences are rational
3. f(Θ) = X (every outcome in X is achieved for some type profile)

Myerson-Satterthwaite theorem
Myerson and Satterthwaite (1983) show there is no efficient way for two parties to trade a good when they each have
secret and probabilistically varying valuations for it, without the risk of forcing one party to trade at a loss. It is
among the most remarkable negative results in economics—a kind of negative mirror to the fundamental theorems of
welfare economics.

Examples

Price discrimination
Mirrlees (1971) introduces a setting in which the transfer function t() is easy to solve for. Due to its relevance and
tractability it is a common setting in the literature. Consider a single-good, single-agent setting in which the agent has quasilinear utility with an unknown type parameter θ,
 u(x, t, θ) = V(x, θ) − t,
and in which the principal has a prior CDF over the agent's type, P(θ). The principal can produce goods at a convex marginal cost c(x) and wants to maximize the expected profit from the transaction
 E_θ[ t(θ) − c(x(θ)) ]
subject to the IC and IR conditions
 u(x(θ), t(θ), θ) ≥ u(x(θ′), t(θ′), θ) for all θ, θ′  (IC)
 u(x(θ), t(θ), θ) ≥ 0 for all θ  (IR)


The principal here is a monopolist trying to set a profit-maximizing price scheme in which it cannot identify the type of the customer. A common example is an airline setting fares for business, leisure and student travelers. Due to the IR condition it has to give every type a good enough deal to induce participation. Due to the IC condition it has to give every type a good enough deal that the type prefers its own deal to that of any other.
A trick given by Mirrlees (1971) is to use the envelope theorem to eliminate the transfer function from the expectation to be maximized. Writing U(θ) = u(x(θ), t(θ), θ) for the agent's equilibrium utility,
 dU/dθ = ∂u/∂θ.
Integrating,
 U(θ) = U(θ_0) + ∫ from θ_0 to θ of (∂u/∂θ̃) dθ̃,
where θ_0 is some index type. Replacing the incentive-compatible t(θ) in the maximand gives, after an integration by parts,
 E_θ[ V(x(θ), θ) − c(x(θ)) − ((1 − P(θ))/p(θ)) · ∂V/∂θ(x(θ), θ) − U(θ_0) ].
This function can be maximized pointwise, a fantastic result because it dispenses with the need to use the calculus of variations.
Because t(θ) is incentive-compatible already, the designer can drop the IC constraint. If the utility function satisfies the Spence-Mirrlees condition then a monotonic x(θ) exists. The IR constraint can be checked at equilibrium and the fee schedule raised or lowered accordingly. Additionally, note the presence of a hazard rate in the expression. If the type distribution bears the monotone hazard ratio property, the FOC is sufficient to solve for t(θ). If not, then it is necessary to check whether the monotonicity constraint (see sufficiency, above) is satisfied everywhere along the allocation and fee schedules. If not, then the designer must use Myerson ironing.
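For concreteness, a sketch of the resulting pointwise first-order condition (this restatement of the standard "virtual surplus" formula is mine, not quoted from the article):
\[
\frac{\partial V}{\partial x}\bigl(x(\theta),\theta\bigr)
= c'\bigl(x(\theta)\bigr)
+ \frac{1-P(\theta)}{p(\theta)}\,
  \frac{\partial^2 V}{\partial x\,\partial\theta}\bigl(x(\theta),\theta\bigr),
\]
so the highest type (where P(θ) = 1) receives the efficient quantity, while every lower type's allocation is distorted downward.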

Myerson ironing
In some applications the designer may solve the first-order conditions for the price and allocation schedules yet find they are not monotonic. For example, in the quasilinear setting this often happens when the hazard ratio is itself not monotone. By the Spence-Mirrlees condition the optimal price and allocation schedules must be monotonic, so the designer must eliminate any interval over which the schedule changes direction by flattening it.
[Figure: It is possible to solve for a goods or price schedule that satisfies the first-order conditions yet is not monotonic. If so, it is necessary to "iron" the schedule by choosing some value at which to flatten the function.]
Intuitively, what is going on is that the designer finds it optimal to bunch certain types together and give them the same contract. Normally the designer motivates higher types to distinguish themselves by giving them a better deal. If there are too few higher types on the margin, the designer does not find it worthwhile to grant lower types a concession (called their information rent) in order to charge higher types a type-specific contract.

Consider a monopolist principal selling to agents with quasilinear utility, the example above. Suppose the allocation schedule x(θ) satisfying the first-order conditions has a single interior peak at θ_1 and a single interior trough at θ_2 > θ_1, as illustrated in the figure.
• Following Myerson (1981), flatten the schedule by choosing a level x̄ at which the average distortion over the ironed interval is zero, where φ_1(x̄) is the inverse function of x mapping to types before the interior peak and φ_2(x̄) is the inverse function of x mapping to types after the interior trough. That is, φ_1 returns a type before the interior peak and φ_2 returns a type after the interior trough.
• If the nonmonotonic region of x(θ) borders the edge of the type space, simply set the appropriate φ function (or both) to the boundary type. If there are multiple regions, see a textbook for an iterative procedure; it may be that more than one trough should be ironed together.

Proof

The proof uses the theory of optimal control. It considers the set of intervals in the nonmonotonic region of x(θ) over which it might flatten the schedule. It then writes a Hamiltonian to obtain necessary conditions for a flattened x(θ) within the intervals
1. that does satisfy monotonicity
2. for which the monotonicity constraint is not binding on the boundaries of the interval
Condition two ensures that the x(θ) satisfying the optimal control problem reconnects to the schedule in the original problem at the interval boundaries (no jumps). Any x(θ) satisfying the necessary conditions must be flat because it must be monotonic and yet reconnect at the boundaries.

As before, maximize the principal's expected payoff, but this time subject to the monotonicity constraint
 ∂x/∂θ ≥ 0,
and use a Hamiltonian to do it, with a shadow price ν(θ), where x is a state variable and its slope the control. As usual in optimal control, the costate must satisfy the adjoint evolution equation. Taking advantage of condition 2, note the monotonicity constraint is not binding at the boundaries of the interval, meaning the costate condition can be integrated and also equals 0. The average distortion of the principal's surplus over the ironed interval must therefore be 0. To flatten the schedule, find an x̄ such that its inverse image maps to an interval satisfying this condition.

Notes
[1] "Multiagent Systems" (http:/ / staff. science. uva. nl/ ~ulle/ teaching/ mas/ slides/ mas-mechanism-design-4up. pdf), from the University of
Amsterdam.
[2] "The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2007" (http:/ / nobelprize. org/ nobel_prizes/ economics/
laureates/ 2007/ press. html) (Press release). Nobel Foundation. October 15, 2007. . Retrieved 2008-08-15.
[3] In unusual circumstances some truth-telling games have more equilibria than the Bayesian game they mapped from. See Fudenburg-Tirole
Ch. 7.2 for some references.

References
• Chapter 7 of Fudenberg, Drew; Tirole, Jean (1991), Game Theory (http://www.amazon.com/
Game-Theory-Drew-Fudenberg/dp/0262061414/ref=sr_1_1?ie=UTF8&s=books&qid=1245092433&sr=8-1),
Boston: MIT Press, ISBN 978-0262061414. A standard text for graduate game theory.
• Chapter 23 of Mas-Colell; Whinston; Green (1995), Microeconomic Theory (http://www.amazon.com/
Microeconomic-Theory-Andreu-Mas-Colell/dp/0195073401/ref=sr_1_1?ie=UTF8&s=books&
qid=1245092701&sr=1-1), Oxford: Oxford Press, ISBN 978-0195073409. A standard text for graduate
microeconomics.
• Milgrom, Paul (2004), Putting Auction Theory to Work (http://www.amazon.com/
Putting-Auction-Churchill-Lectures-Economics/dp/0521536723/ref=sr_1_1?ie=UTF8&s=books&
qid=1245365630&sr=8-1), New York: Cambridge University Press, ISBN 978-0-521-55184-7. Applications of
mechanism design principles in the context of auctions.
• Noam Nisan (http://www.cs.huji.ac.il/~noam/). A Google tech talk (http://www.youtube.com/
watch?v=Ps5aYsG8jY0) on mechanism design.
• Roger B. Myerson (2008). "mechanism design," The New Palgrave Dictionary of Economics Online, Abstract.
(http://www.dictionaryofeconomics.com/article?id=pde2008_M000132&q=Mechanism design&topicid=&
result_number=2)

Bargaining Problem
The two person bargaining problem is a problem of understanding how two agents should cooperate when
non-cooperation leads to Pareto-inefficient results. It is in essence an equilibrium selection problem; many games
have multiple equilibria with varying payoffs for each player, forcing the players to negotiate on which equilibrium
to target. The quintessential example of such a game is the Ultimatum game. The underlying assumption of
bargaining theory is that the resulting solution should be the same solution an impartial arbitrator would recommend.
Solutions to bargaining come in two flavors: an axiomatic approach where desired properties of a solution are
satisfied and a strategic approach where the bargaining procedure is modeled in detail as a sequential game.

The bargaining game


The bargaining game or Nash bargaining game is a simple two-player game used to model bargaining
interactions. In the Nash Bargaining Game two players demand a portion of some good (usually some amount of
money). If the total amount requested by the players is less than that available, both players get their request. If their
total request is greater than that available, neither player gets their request. A Nash bargaining solution is a (Pareto
efficient) solution to a Nash bargaining game. According to Walker (2005), Nash's bargaining solution was shown
by John Harsanyi to be the same as Zeuthen's solution of the bargaining problem (Problems of Monopoly and
Economic Warfare, 1930).

An example
            Opera   Football
 Opera       3,2      0,0
 Football    0,0      2,3

Battle of the Sexes

The Battle of the Sexes, as shown, is a two player coordination game. Both Opera/Opera and Football/Football are
Nash equilibria. Any probability distribution over these two Nash equilibria is a correlated equilibrium. The question
then becomes which of the infinitely many possible equilibria should be chosen by the two players. If they disagree and choose different distributions, they are likely to receive 0 payoffs. In this symmetric case the natural choice is to play Opera/Opera and Football/Football with equal probability; indeed, all bargaining solutions described below prescribe this solution. However, if the game is asymmetric (for example, if Football/Football instead yields payoffs of 2,5) the appropriate distribution is less clear. The problem of finding such a distribution is addressed by bargaining theory.

Formal description
A two-person bargaining problem consists of a disagreement, or threat, point d = (d_1, d_2), where d_1 and d_2 are the respective payoffs to player 1 and player 2, and a feasibility set F, a closed convex subset of R^2, the elements of which are interpreted as agreements. The set F is convex because an agreement could take the form of a correlated combination of other agreements. The problem is nontrivial if agreements in F are better for both parties than the disagreement point. The goal of bargaining is to choose the feasible agreement in F that could result from negotiations.

Feasibility set
Which agreements are feasible depends on whether bargaining is mediated by an additional party. When binding
contracts are allowed, any joint action is playable, and the feasibility set consists of all attainable payoffs better than
the disagreement point. When binding contracts are unavailable, the players can defect (moral hazard), and the
feasibility set is composed of correlated equilibria, since these outcomes require no exogenous enforcement.

Disagreement point
The disagreement point is the value the players can expect to receive if negotiations break down. This could be
some focal equilibrium that both players could expect to play. This point directly affects the bargaining solution,
however, so it stands to reason that each player should attempt to choose his disagreement point in order to
maximize his bargaining position. Towards this objective, it is often advantageous to increase one's own
disagreement payoff while harming the opponent's disagreement payoff (hence the interpretation of the disagreement as a threat). If threats are viewed as actions, then one can construct a separate game wherein each player chooses a threat and receives a payoff according to the outcome of bargaining. This is known as Nash's variable threat game. Alternatively, each player could play a minimax strategy in case of disagreement, choosing to disregard personal reward in order to hurt the opponent as much as possible should the opponent leave the bargaining table.

Equilibrium analysis
Strategies are represented in the Nash bargaining game by a pair (x, y). x and y are selected from the interval [d, z], where z is the total good. If x + y is equal to or less than z, the first player receives x and the second y. Otherwise both get d; d here represents the disagreement point or the threat of the game, and often d = 0.
There are many Nash equilibria in the Nash bargaining game. Any x and y such that x + y = z is a Nash equilibrium.
If either player increases their demand, both players receive nothing. If either reduces their demand they will receive
less than if they had demanded x or y. There is also a Nash equilibrium where both players demand the entire good.
Here both players receive nothing, but neither player can increase their return by unilaterally changing their strategy.
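A quick brute-force check (my own illustration, not from the article) confirms the claim for a discretized version of the game with z = 10 and d = 0: at the demand pair (4, 6), neither player has a profitable unilateral deviation.

    z = 10

    def payoff(x, y, d=0):
        # Demands are met when jointly feasible; otherwise both get d.
        return (x, y) if x + y <= z else (d, d)

    x, y = 4, 6                                    # any split with x + y = z works
    best_x = max(range(z + 1), key=lambda xx: payoff(xx, y)[0])
    best_y = max(range(z + 1), key=lambda yy: payoff(x, yy)[1])
    print(best_x == x and best_y == y)             # True: no unilateral deviation pays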

Bargaining solutions
Various solutions have been proposed based on slightly different assumptions about what properties are desired for
the final agreement point.

Nash bargaining solution


John Nash proposed that a solution should satisfy certain axioms:
1. Invariant to affine transformations or Invariant to equivalent utility representations
2. Pareto optimality
3. Independence of irrelevant alternatives
4. Symmetry
Let u and v be the utility functions of Player 1 and Player 2, respectively. In the Nash bargaining solution, the players will seek to maximize (u(x) − u(d)) · (v(y) − v(d)), where u(d) and v(d) are the status quo utilities (i.e. the utility obtained if one decides not to bargain with the other player). The product of the two excess utilities is generally referred to as the Nash product.
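As a small numerical sketch (mine, with made-up utility exponents) of maximizing the Nash product over splits of a unit pie with disagreement point (0, 0):

    import numpy as np

    def nash_split(a=1.0, b=1.0, points=100001):
        # Player 1 gets share x, player 2 gets 1 - x; utilities u = x**a and
        # v = (1 - x)**b, with status quo utilities of 0.
        x = np.linspace(0.0, 1.0, points)
        nash_product = (x ** a) * ((1.0 - x) ** b)
        return x[np.argmax(nash_product)]

    print(nash_split())          # 0.5: symmetric players split evenly
    print(nash_split(a=0.5))     # ~0.333: a more risk-averse player 1 gets less

With symmetric linear utilities the maximizer is the even split; making player 1 more risk-averse (a = 0.5) shifts the solution against him, the classic comparative static of the Nash solution.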

Kalai-Smorodinsky bargaining solution


Independence of Irrelevant Alternatives can be substituted with a monotonicity condition, as demonstrated by Ehud Kalai and Meir Smorodinsky. Their solution is the point which maintains the ratios of maximal gains. In other words, if player 1 could receive a maximum of g_1 with player 2's help (and vice versa g_2 for player 2), then the Kalai-Smorodinsky bargaining solution would yield the point (x, y) on the Pareto frontier such that (x − d_1)/(y − d_2) = (g_1 − d_1)/(g_2 − d_2).

Egalitarian bargaining solution


The egalitarian bargaining solution, introduced by Ehud Kalai, is a third solution which drops the condition of scale invariance while including both the axiom of independence of irrelevant alternatives and the axiom of monotonicity.
It is the solution which attempts to grant equal gain to both parties. In other words, it is the point which maximizes
the minimum payoff among players.

Applications
Some philosophers and economists have recently used the Nash bargaining game to explain the emergence of human
attitudes toward distributive justice (Alexander 2000; Alexander and Skyrms 1999; Binmore 1998, 2005). These
authors primarily use evolutionary game theory to explain how individuals come to believe that proposing a 50-50
split is the only just solution to the Nash Bargaining Game.

References
• Alexander, Jason McKenzie (2000). "Evolutionary Explanations of Distributive Justice". Philosophy of Science
67 (3): 490–516. JSTOR 188629.
• Alexander, Jason; Skyrms, Brian (1999). "Bargaining with Neighbors: Is Justice Contagious". Journal of
Philosophy 96 (11): 588–598. JSTOR 2564625.
• Binmore, K.; Rubinstein, A.; Wolinsky, A. (1986). "The Nash Bargaining Solution in Economic Modelling".
RAND Journal of Economics 17: 176–188. JSTOR 2555382.
• Binmore, Kenneth (1998). Game Theory and The Social Contract Volume 2: Just Playing. Cambridge: MIT
Press. ISBN 0262024446.
• Binmore, Kenneth (2005). Natural Justice. New York: Oxford University Press. ISBN 0195178114.
• Kalai, Ehud (1977). "Proportional solutions to bargaining situations: Intertemporal utility comparisons".
Econometrica 45 (7): 1623–1630. JSTOR 1913954.
• Kalai, Ehud & Smorodinsky, Meir (1975). "Other solutions to Nash’s bargaining problem". Econometrica 43 (3):
513–518. JSTOR 1914280.
• Nash, John (1950). "The Bargaining Problem". Econometrica 18 (2): 155–162. JSTOR 1907266.
• Walker, Paul (2005). "History of Game Theory" [1].

External links
• Nash Bargaining Solutions [2]

References
[1] http://www.econ.canterbury.ac.nz/personal_pages/paul_walker/gt/hist.htm#ref94
[2] http://www.cse.iitd.ernet.in/~rahul/cs905/lecture15/

Stochastic game
In game theory, a stochastic game, introduced by Lloyd Shapley in the early 1950s, is a dynamic game with
probabilistic transitions played by one or more players. The game is played in a sequence of stages. At the
beginning of each stage the game is in some state. The players select actions and each player receives a payoff that
depends on the current state and the chosen actions. The game then moves to a new random state whose distribution
depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and
play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted
sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.
Stochastic games generalize both Markov decision processes and repeated games.

Theory
The ingredients of a stochastic game are: a finite set of players I; a state space S (either a finite set or a measurable space); for each player i in I, an action set A^i (either a finite set or a measurable space); a transition probability q from S × A, where A = ×_i A^i is the set of action profiles, to S, where q(s′ | s, a) is the probability that the next state is s′ given the current state s and the current action profile a; and a payoff function g from S × A to R^I, where the i-th coordinate of g, g^i, is the payoff to player i as a function of the state s and the action profile a.
The game starts at some initial state s_1. At stage t, players first observe s_t, then simultaneously choose actions a_t = (a_t^i), then observe the action profile a_t, and then nature selects s_{t+1} according to the probability q(· | s_t, a_t). A play of the stochastic game, s_1, a_1, ..., s_t, a_t, ..., defines a stream of payoffs g_1, g_2, ..., where g_t = g(s_t, a_t).
The discounted game Γ_λ with discount factor λ (0 < λ ≤ 1) is the game where the payoff to player i is λ Σ_{t≥1} (1 − λ)^{t−1} g_t^i. The n-stage game Γ_n is the game where the payoff to player i is the average ḡ_n^i = (1/n) Σ_{t=1..n} g_t^i.

The value v_n(s_1), respectively v_λ(s_1), of a two-person zero-sum stochastic game Γ_n, respectively Γ_λ, with finitely many states and actions exists, and Truman Bewley and Elon Kohlberg (1976) proved that v_n(s_1) converges to a limit as n goes to infinity and that v_λ(s_1) converges to the same limit as λ goes to 0.
The "undiscounted" game Γ_∞ is the game where the payoff to player i is the "limit" of the averages of the stage payoffs. Some precautions are needed in defining the value of a two-person zero-sum Γ_∞ and in defining the equilibrium payoffs of a non-zero-sum Γ_∞. The uniform value v_∞ of a two-person zero-sum stochastic game Γ_∞ exists if for every ε > 0 there is a positive integer N and a strategy pair σ_ε of player 1 and τ_ε of player 2 such that for every σ and τ and every n ≥ N the expectation of ḡ_n^1 with respect to the probability on plays defined by σ_ε and τ is at least v_∞ − ε, and the expectation of ḡ_n^1 with respect to the probability on plays defined by σ and τ_ε is at most v_∞ + ε. Jean-Francois Mertens and Abraham Neyman (1981) proved that every two-person zero-sum stochastic game with finitely many states and actions has a uniform value.
If there is a finite number of players and the action sets and the set of states are finite, then a stochastic game with a
finite number of stages always has a Nash equilibrium. The same is true for a game with infinitely many stages if the
total payoff is the discounted sum. Nicolas Vieille has shown that all two-person stochastic games with finite state
and action spaces have approximate Nash equilibria when the total payoff is the limit inferior of the averages of the
stage payoffs. Whether such equilibria exist when there are more than two players is a challenging open question.
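For the discounted two-player zero-sum case, the value can be computed by Shapley's value iteration: in each state, solve the one-shot matrix game whose entries mix the stage payoff with the continuation values. The sketch below (my illustration; the two-state game data are made up, and scipy is assumed available) is one minimal way to do this.

    import numpy as np
    from scipy.optimize import linprog

    def matrix_game_value(A):
        # Value of a one-shot zero-sum matrix game, row player maximizing.
        A = np.asarray(A, dtype=float)
        shift = 1.0 - min(A.min(), 0.0)   # translate so every entry is positive
        A = A + shift
        m, n = A.shape
        # Scaled strategy u = p / value: minimize sum(u) s.t. A' u >= 1, u >= 0.
        res = linprog(c=np.ones(m), A_ub=-A.T, b_ub=-np.ones(n),
                      bounds=[(0, None)] * m)
        return 1.0 / res.x.sum() - shift

    def shapley_iteration(payoff, trans, lam, iters=500):
        # payoff[s]: stage-payoff matrix in state s; trans[s][i, j]: distribution
        # over next states after actions (i, j); lam: the discount factor above.
        v = np.zeros(len(payoff))
        for _ in range(iters):
            v = np.array([matrix_game_value(lam * payoff[s]
                                            + (1 - lam) * (trans[s] @ v))
                          for s in range(len(payoff))])
        return v

    payoff = [np.array([[1.0, 0.0], [0.0, 1.0]]),   # state 0: matching-pennies-like
              np.array([[3.0, 1.0], [0.0, 2.0]])]   # state 1: made-up numbers
    trans = [np.full((2, 2, 2), 0.5)] * 2           # next state uniform from anywhere
    print(shapley_iteration(payoff, trans, lam=0.2))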

Applications
Stochastic games have applications in economics, evolutionary biology and computer networks.[1] They are
generalizations of repeated games which correspond to the special case where there is only one state.

Reference books
The most complete reference is the book of articles edited by Neyman and Sorin. The more elementary book of Filar
and Vrieze provides a unified rigorous treatment of the theories of Markov Decision Processes and two-person
stochastic games. They coin the term Competitive MDPs to encompass both one- and two-player stochastic games.

Notes
[1] Constrained Stochastic Games in Wireless Networks (http://www-net.cs.umass.edu/~sadoc/mdp/main.pdf) by E. Altman, K. Avratchenkov, N. Bonneau, M. Debbah, R. El-Azouzi, D. S. Menasche

Further reading
• Condon, A. (1992). "The complexity of stochastic games". Information and Computation 96: 203–224.
doi:10.1016/0890-5401(92)90048-K.
• H. Everett (1957). "Recursive games". In Melvin Dresher, Albert William Tucker, Philip Wolfe. Contributions to
the Theory of Games, Volume 3. Annals of Mathematics Studies. Princeton University Press. pp. 67–78.
ISBN 0691079366, ISBN 9780691079363. (Reprinted in Harold W. Kuhn, ed. Classics in Game Theory,
Princeton University Press, 1997. ISBN 978-0-691-01192-9).
• Filar, J. & Vrieze, K. (1997). Competitive Markov Decision Processes. Springer-Verlag. ISBN 0387948058.
• Mertens, J. F. & Neyman, A. (1981). "Stochastic Games". International Journal of Game Theory 10 (2): 53–66.
doi:10.1007/BF01769259.
• Neyman, A. & Sorin, S. (2003). Stochastic Games and Applications. Dordrecht: Kluwer Academic Press.
ISBN 1402014929.
• Shapley, L. S. (1953). "Stochastic games" (http://www.pnas.org/content/39/10/1095). PNAS 39 (10):
1095–1100. doi:10.1073/pnas.39.10.1095.
• Vieille, N. (2002). "Stochastic games: Recent results". Handbook of Game Theory. Amsterdam: Elsevier Science.
pp. 1833–1850. ISBN 0444880984.
• Yoav Shoham; Kevin Leyton-Brown (2009). Multiagent systems: algorithmic, game-theoretic, and logical
foundations. Cambridge University Press. pp. 153–156. ISBN 9780521899437. (suitable for undergraduates;
main results, no proofs)

Large poisson game


In game theory the large Poisson game is a game with a random number of players. More exactly, the number of players, N, is a Poisson random variable. The type of each player is selected randomly, independently of the other players' types, from a given set T. Each player selects an action and then the payoffs are determined.

Formal definitions
Large Poisson game - the collection (n, T, r, C, u), where:
 n - the average number of players in the game
 T - the set of all possible types for a player (same for each player)
 r - the probability distribution over T according to which the types are selected
 C - the set of all possible pure choices (same for each player, same for each type)
 u - the payoff (utility) function
The total number of players, N, is a Poisson distributed random variable:
 P(N = k) = e^(−n) n^k / k!
Strategy -
Nash equilibrium -

Simple probabilistic properties


Environmental equivalence - from the perspective of each player, the number of other players is a Poisson random variable with mean n.
Decomposition property for types - the number of players of type t is a Poisson random variable with mean n·r(t).
Decomposition property for choices - the number of players who have chosen a given choice c is a Poisson random variable whose mean is n times the probability that a player makes that choice.
Pivotal probability ordering - every limit of ratios of pivotal probabilities is equal to 0 or to infinity. This means that all pivotal probabilities may be ordered from the most important to the least important.
Magnitude - this has a nice form: twice the geometric mean minus the arithmetic mean.
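The decomposition property is easy to see by simulation. The sketch below (my illustration; the type set and distribution are made up) draws N from a Poisson distribution, assigns types independently, and confirms that the count of each type behaves like a Poisson variable with mean n·r(t), with variance matching the mean as a Poisson variable requires.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10.0                               # average number of players
    types, probs = ["A", "B"], [0.3, 0.7]  # illustrative type set and distribution r
    counts = {t: [] for t in types}
    for _ in range(50000):
        N = rng.poisson(n)                 # realized number of players
        draws = rng.choice(types, size=N, p=probs)
        for t in types:
            counts[t].append(np.sum(draws == t))
    for t, p in zip(types, probs):
        # Empirical mean and variance both come out near n * r(t).
        print(t, np.mean(counts[t]), np.var(counts[t]), n * p)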

Existence of equilibrium
Theorem 1. Nash equilibrium exists.
Theorem 2. Nash equilibrium in undominated strategies exists.

Applications
Large Poisson games are mainly used as models for voting procedures.

References
• Myerson, Roger B. (2000). "Large Poisson Games" [1]. Journal of Economic Theory 94 (1): 7–45.
doi:10.1006/jeth.1998.2453.

• Myerson, Roger B. (1998). "Population Uncertainty and Poisson Games" [2]. International Journal of Game
Theory 27 (3): 375–392. doi:10.1007/s001820050079.
• De Sinopoli, Francesco; Pimienta, Carlos G. (2009). "Undominated (and) perfect equilibria in Poisson games".
Games and Economic Behavior 66 (2): 775–784. doi:10.1016/j.geb.2008.09.029.

References
[1] http://ideas.repec.org/p/nwu/cmsems/1189.html
[2] http://ideas.repec.org/p/nwu/cmsems/1102.html

Nontransitive game
A non-transitive game is a game for which the various strategies produce one or more "loops" of preferences. As a
result, in a non-transitive game the fact that strategy A is preferred over strategy B, and strategy B is preferred over
strategy C, does not necessarily imply that strategy A is preferred over strategy C. See also intransitivity, transitive
relation.
A prototypical example of a non-transitive game is the game Rock, Paper, Scissors, which is explicitly constructed as a non-transitive game. In probabilistic games like Penney's game, the violation of transitivity arises in a more subtle way, and is often presented as a probability paradox.
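A worked illustration (mine, not from the article) using a standard set of nontransitive dice: each die beats the next with probability 5/9, so the "preference" loop closes.

    from itertools import product

    def beats(d1, d2):
        # Probability that a roll of d1 strictly exceeds a roll of d2.
        wins = sum(a > b for a, b in product(d1, d2))
        return wins / (len(d1) * len(d2))

    A = [2, 2, 4, 4, 9, 9]
    B = [1, 1, 6, 6, 8, 8]
    C = [3, 3, 5, 5, 7, 7]
    print(beats(A, B), beats(B, C), beats(C, A))   # each is 5/9: A > B > C > A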

Examples
Examples of non-transitive games are:
• Rock, Paper, Scissors
• Penney's game
• Nontransitive dice

References
• Gardner, Martin (2001). The Colossal Book of Mathematics. New York: W.W. Norton. ISBN 0393020231.

Global game
In economics and game theory, global games are games of incomplete information where players receive
possibly-correlated signals of the underlying state of the world. Global games were originally defined by Carlsson
and van Damme (1993). The most important practical application of global games has been the study of crises in
financial markets such as bank runs, currency crises, and bubbles.

Global games in models of currency crises


Stephen Morris and Hyun Song Shin (1998) considered a stylized currency crisis model, in which traders observe the relevant fundamentals with small noise, and showed that this leads to the selection of a unique equilibrium. This result overturns the result in models of complete information, which feature multiple equilibria.
One concern with the robustness of this result is that the introduction of a theory of prices in global coordination games may reintroduce multiplicity of equilibria (Atkeson, 2001). This concern was addressed in Angeletos and Werning (2006) and Hellwig et al. (2006). They show that equilibrium multiplicity may be restored by the existence of prices acting as an endogenous public signal, provided that private information is sufficiently precise.

References
• George-Marios Angeletos and Ivan Werning (2006), "Crises and Prices: Information Aggregation, Multiplicity,
and Volatility," American Economic Review, 96 (5): 1720–36.
• Andrew G. Atkeson, (2001), "Rethinking Multiple Equilibria in Macroeconomic Modeling: Comment." In NBER
Macroeconomics Annual 2000, ed. Ben S. Bernanke and Kenneth Rogoff, 162–71. Cambridge, MA: MIT Press.
• Christian Hellwig, Arijit Mukherji and Aleh Tsyvinski (2006), "Self-Fulfilling Currency Crises: The Role of
Interest Rates," American Economic Review, 96 (5): 1769-1787.
• Stephen Morris and Hyun Song Shin (1998), "Unique Equilibrium in a Model of Self-Fulfilling Currency
Attacks," American Economic Review, 88 (3): 587–97.
• Stephen Morris & Hyun S Shin, 2001. "Global Games: Theory and Applications." [1]
• Hans Carlsson and Eric van Damme (1993), "Global Games and Equilibrium Selection," Econometrica 61 (5):
989–1018.

References
[1] http://cowles.econ.yale.edu/P/cd/d12b/d1275-r.pdf

Games

Prisoner's dilemma
The prisoner's dilemma is a canonical example of a game analyzed in game theory that shows why two individuals might not cooperate, even if it appears that it is in their best interest to do so. It was originally framed by Merrill
Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence
payoffs and gave it the "prisoner's dilemma" name (Poundstone, 1992). A classic example of the prisoner's dilemma
(PD) is presented as follows:
Two men are arrested, but the police do not possess enough information for a conviction. Following the
separation of the two men, the police offer both a similar deal—if one testifies against his partner
(defects/betrays), and the other remains silent (cooperates/assists), the betrayer goes free and the cooperator
receives the full one-year sentence. If both remain silent, both are sentenced to only one month in jail for a
minor charge. If each 'rats out' the other, each receives a three-month sentence. Each prisoner must choose
either to betray or remain silent; the decision of each is kept quiet. What should they do?
If it is supposed here that each player is only concerned with lessening his time in jail, the game becomes a non-zero-sum game where the two players may either assist or betray the other. In the game, each prisoner's sole concern seems to be increasing his own reward. The interesting symmetry of this problem is that the logical decision leads both to betray the other, even though their individual ‘prize’ would be greater if they cooperated.
In the regular version of this game, collaboration is dominated by betraying, and as a result, the only possible
outcome of the game is for both prisoners to betray the other. Regardless of what the other prisoner chooses, one will
always gain a greater payoff by betraying the other. Because betraying is always more beneficial than cooperating,
all objective prisoners would seemingly betray the other.
In the extended form game, the game is played over and over, and consequently, both prisoners continuously have an
opportunity to penalize the other for the previous decision. If the number of times the game will be played is known,
the finite aspect of the game means that by backward induction, the two prisoners will betray each other repeatedly.
In casual usage, the label "prisoner's dilemma" may be applied to situations not strictly matching the formal criteria
of the classic or iterative games, for instance, those in which two entities could gain important benefits from
cooperating or suffer from the failure to do so, but find it merely difficult or expensive, not necessarily impossible, to
coordinate their activities to achieve cooperation.

Strategy for the classic prisoners' dilemma


The normal game is shown below:

                                         Prisoner B stays silent (cooperates)        Prisoner B confesses (defects)
 Prisoner A stays silent (cooperates)    Each serves 1 month                         Prisoner A: 1 year; Prisoner B: goes free
 Prisoner A confesses (defects)          Prisoner A: goes free; Prisoner B: 1 year   Each serves 3 months

Here, regardless of what the other decides, each prisoner gets a higher pay-off by betraying the other. For example,
Prisoner A can, with close certainty, state that no matter what prisoner B chooses, prisoner A is better off 'ratting him
out' (defecting) than staying silent (cooperating). As a result, solely for his own benefit, prisoner A should logically
betray him. On the other hand, if prisoner B acts the same way, then they both have acted the same way, and both
receive a lower reward than if both were to stay quiet. Seemingly logical decisions result in both players being worse
off than if each chose to lessen the sentence of his accomplice at the cost of spending more time in jail himself.
Although they are not permitted to communicate, if the prisoners trust each other, they can both rationally choose to
remain silent, lessening the penalty for both of them.

Generalized form
We can expose the framework of the traditional prisoners' dilemma by removing its original prisoner setting, presented as follows:
There are two players and an impartial third party. Each player holds two cards, one with the word
‘collaborate’, and the other with ‘hinder’. Each player gives one card to the third person, thereby getting rid of
the possibility of the player’s knowing the other’s decision in advance. At the end of the turn, payments are
given based on the cards played.
Based on the rules of a typical understanding of the prisoner's dilemma, if the two players are represented by the colors red and blue, and the choices made are assigned point values, it becomes clear that if the red player betrays and the blue player assists, red gets the T payoff of 5 points while blue gets no payoff at all. If both cooperate they get the R payoff of 3 points each, while if they both betray they get the P payoff of 1 point. The payoffs are shown below.

Example PD payoff matrix

             Cooperate   Defect
 Cooperate   3, 3        0, 5
 Defect      5, 0        1, 1

In simple terms, the matrix looks like this:



             Cooperate             Defect
 Cooperate   win-win               lose more-win more
 Defect      win more-lose more    lose-lose

It is then possible to generalize the point values:

Canonical PD payoff matrix

             Cooperate   Defect
 Cooperate   R, R        S, T
 Defect      T, S        P, P

Here T stands for the Temptation to defect, R for the Reward for mutual cooperation, P for the Punishment for mutual defection, and S for the Sucker's payoff. To be a prisoner's dilemma, the following must be true:
T>R>P>S
This ordering guarantees that the equilibrium outcome is mutual betrayal, even though mutual cooperation gives each player a higher payoff. In addition to the above condition, if the game is repeated more than once, the following should also hold:[1]
2R > T + S
If this condition does not hold, constant cooperation is not necessarily optimal, as the players are actually better off having each player alternate between cooperating and betraying. (The example payoffs above satisfy both conditions: 5 > 3 > 1 > 0 and 2·3 = 6 > 5 + 0.)
These rules were established by cognitive scientist Douglas Hofstadter and form the formal canonical description of
a typical game of prisoner's dilemma.
A simple special case occurs when the advantage of defection over cooperation is independent of what the co-player does and the cost of the co-player's defection is independent of one's own action, i.e. T + S = P + R.

The iterated prisoners' dilemma


If two players play prisoners' dilemma more than once in succession and they remember previous actions of their
opponent and change their strategy accordingly, the game is called iterated prisoners' dilemma.
The iterated prisoners' dilemma game is fundamental to certain theories of human cooperation and trust. On the
assumption that the game can model transactions between two people requiring trust, cooperative behaviour in
populations may be modeled by a multi-player, iterated, version of the game. It has, consequently, fascinated many
scholars over the years. In 1975, Grofman and Pool estimated the count of scholarly articles devoted to it at over
2,000. The iterated prisoners' dilemma has also been referred to as the "Peace-War game".[2]
If the game is played exactly N times and both players know this, then it is always game theoretically optimal to
defect in all rounds. The only possible Nash equilibrium is to always defect. The proof is inductive: one might as
well defect on the last turn, since the opponent will not have a chance to punish the player. Therefore, both will
defect on the last turn. Thus, the player might as well defect on the second-to-last turn, since the opponent will defect
on the last no matter what is done, and so on. The same applies if the game length is unknown but has a known upper
limit.
Unlike the standard prisoners' dilemma, in the iterated prisoners' dilemma the defection strategy is counter-intuitive
and fails badly to predict the behavior of human players. Within standard economic theory, though, this is the only
correct answer. The superrational strategy in the iterated prisoners' dilemma with fixed N is to cooperate against a
superrational opponent, and in the limit of large N, experimental results on strategies agree with the superrational
version, not the game-theoretic rational one.

For cooperation to emerge between game theoretic rational players, the total number of rounds N must be random, or
at least unknown to the players. In this case always defect may no longer be a strictly dominant strategy, only a Nash
equilibrium. Among the results shown by Robert Aumann in a 1959 paper is that rational players repeatedly interacting
for indefinitely long games can sustain the cooperative outcome.

Strategy for the classic prisoners' dilemma


Interest in the iterated prisoners' dilemma (IPD) was kindled by Robert Axelrod in his book The Evolution of
Cooperation (1984). In it he reports on a tournament he organized of the N step prisoners' dilemma (with N fixed) in
which participants have to choose their mutual strategy again and again, and have memory of their previous
encounters. Axelrod invited academic colleagues all over the world to devise computer strategies to compete in an
IPD tournament. The programs that were entered varied widely in algorithmic complexity, initial hostility, capacity
for forgiveness, and so forth.
Axelrod discovered that when these encounters were repeated over a long period of time with many players, each
with different strategies, greedy strategies tended to do very poorly in the long run while more altruistic strategies
did better, as judged purely by self-interest. He used this to show a possible mechanism for the evolution of altruistic
behaviour from mechanisms that are initially purely selfish, by natural selection.
The best deterministic strategy was found to be tit for tat, which Anatol Rapoport developed and entered into the
tournament. It was the simplest of any program entered, containing only four lines of BASIC, and won the contest.
The strategy is simply to cooperate on the first iteration of the game; after that, the player does what his or her
opponent did on the previous move. Depending on the situation, a slightly better strategy can be "tit for tat with
forgiveness." When the opponent defects, on the next move, the player sometimes cooperates anyway, with a small
probability (around 1–5%). This allows for occasional recovery from getting trapped in a cycle of defections. The
exact probability depends on the line-up of opponents.
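
As an illustration (a Python sketch in our own notation, not Rapoport's original BASIC entry), tit for tat and the
forgiving variant just described take only a few lines; the function names, the "C"/"D" move encoding, and the 5%
forgiveness probability are our illustrative choices:

    import random

    def tit_for_tat(my_history, opponent_history):
        # Cooperate on the first move; afterwards copy the opponent's last move.
        if not opponent_history:
            return "C"
        return opponent_history[-1]

    def tit_for_tat_with_forgiveness(my_history, opponent_history, p_forgive=0.05):
        # As above, but after an opponent defection cooperate anyway with a small
        # probability (the text suggests around 1-5%; 5% is chosen here).
        move = tit_for_tat(my_history, opponent_history)
        if move == "D" and random.random() < p_forgive:
            return "C"
        return move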
By analysing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to be successful.
Nice
The most important condition is that the strategy must be "nice", that is, it will not defect before its opponent
does (this is sometimes referred to as an "optimistic" algorithm). Almost all of the top-scoring strategies were
nice; therefore, even a purely selfish strategy will not "cheat" on its opponent first, for purely self-interested reasons.
Retaliating
However, Axelrod contended, the successful strategy must not be a blind optimist. It must sometimes retaliate.
An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as "nasty" strategies
will ruthlessly exploit such players.
Forgiving
Successful strategies must also be forgiving. Though players will retaliate, they will once again fall back to
cooperating if the opponent does not continue to defect. This stops long runs of revenge and counter-revenge,
maximizing points.
Non-envious
The last quality is being non-envious, that is not striving to score more than the opponent (note that a "nice"
strategy can never score more than the opponent).
The optimal (points-maximizing) strategy for the one-time PD game is simply defection; as explained above, this is
true whatever the composition of opponents may be. However, in the iterated-PD game the optimal strategy depends
upon the strategies of likely opponents, and how they will react to defections and cooperations. For example,
consider a population where everyone defects every time, except for a single individual following the tit for tat
strategy. That individual is at a slight disadvantage because of the loss on the first turn. In such a population, the
optimal strategy for that individual is to defect every time. In a population with a certain percentage of
always-defectors and the rest being tit for tat players, the optimal strategy for an individual depends on the
percentage, and on the length of the game.
A strategy called Pavlov (an example of Win-Stay, Lose-Switch) cooperates at the first iteration and whenever the
player and co-player did the same thing at the previous iteration; Pavlov defects when the player and co-player did
different things at the previous iteration. For a certain range of parameters, Pavlov beats all other strategies by giving
preferential treatment to co-players which resemble Pavlov.
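
A sketch of Pavlov in the same style as the strategy functions above (again with our naming and move encoding):

    def pavlov(my_history, opponent_history):
        # Win-Stay, Lose-Switch: cooperate on the first move and whenever both
        # players did the same thing in the previous round; otherwise defect.
        if not my_history:
            return "C"
        return "C" if my_history[-1] == opponent_history[-1] else "D"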
Deriving the optimal strategy is generally done in two ways:
1. Bayesian Nash Equilibrium: If the statistical distribution of opposing strategies can be determined (e.g. 50% tit
for tat, 50% always cooperate) an optimal counter-strategy can be derived analytically.[3]
2. Monte Carlo simulations of populations have been made, where individuals with low scores die off, and those
with high scores reproduce (a genetic algorithm for finding an optimal strategy). The mix of algorithms in the
final population generally depends on the mix in the initial population. The introduction of mutation (random
variation during reproduction) lessens the dependency on the initial population; empirical experiments with such
systems tend to produce tit for tat players (see for instance Chess 1988), but there is no analytic proof that this
will always occur.
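
A minimal sketch of the second (Monte Carlo) approach, reusing strategy functions of the kind shown above; the
payoffs are the canonical ones, while the round count, population handling, and fitness-proportional reproduction
rule are illustrative assumptions rather than the setup of any particular published study (mutation is omitted):

    import random

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play(s1, s2, rounds=50):
        # Score strategy s1 against s2 in an iterated prisoner's dilemma.
        h1, h2, score1 = [], [], 0
        for _ in range(rounds):
            m1, m2 = s1(h1, h2), s2(h2, h1)
            score1 += PAYOFF[(m1, m2)][0]
            h1.append(m1)
            h2.append(m2)
        return score1

    def evolve(population, generations=100):
        # Toy scheme: strategies reproduce in proportion to their round-robin
        # score; a fuller study would add mutation during reproduction.
        for _ in range(generations):
            scores = [sum(play(s, t) for t in population) for s in population]
            population = random.choices(population, weights=scores, k=len(population))
        return population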
Although tit for tat is considered to be the most robust basic strategy, a team from Southampton University in
England (led by Professor Nicholas Jennings[4] and consisting of Rajdeep Dash, Sarvapali Ramchurn, Alex Rogers,
Perukrishnen Vytelingum) introduced a new strategy at the 20th-anniversary iterated prisoners' dilemma
competition, which proved to be more successful than tit for tat. This strategy relied on cooperation between
programs to achieve the highest number of points for a single program. The University submitted 60 programs to the
competition, which were designed to recognize each other through a series of five to ten moves at the start. Once this
recognition was made, one program would always cooperate and the other would always defect, assuring the
maximum number of points for the defector. If the program realized that it was playing a non-Southampton player, it
would continuously defect in an attempt to minimize the score of the competing program. As a result,[5] this strategy
ended up taking the top three positions in the competition, as well as a number of positions towards the bottom.
This strategy takes advantage of the fact that multiple entries were allowed in this particular competition, and that the
performance of a team was measured by that of the highest-scoring player (meaning that the use of self-sacrificing
players was a form of minmaxing). In a competition where one has control of only a single player, tit for tat is
certainly a better strategy. Because of this new rule, this competition also has little theoretical significance when
analysing single agent strategies as compared to Axelrod's seminal tournament. However, it provided the framework
for analysing how to achieve cooperative strategies in multi-agent frameworks, especially in the presence of noise. In
fact, long before this new-rules tournament was played, Richard Dawkins in his book The Selfish Gene pointed out
the possibility of such strategies winning if multiple entries were allowed, but remarked that most probably Axelrod
would not have allowed them if they had been submitted. It also relies on circumventing rules about the prisoners'
dilemma in that there is no communication allowed between the two players. When the Southampton programs
engage in an opening "ten move dance" to recognize one another, this only reinforces just how valuable
communication can be in shifting the balance of the game.

Continuous iterated prisoners' dilemma


Most work on the iterated prisoners' dilemma has focused on the discrete case, in which players either cooperate or
defect, because this model is relatively simple to analyze. However, some researchers have looked at models of the
continuous iterated prisoners' dilemma, in which players are able to make a variable contribution to the other player.
Le and Boyd[6] found that in such situations, cooperation is much harder to evolve than in the discrete iterated
prisoners' dilemma. The basic intuition for this result is straightforward: in a continuous prisoners' dilemma, if a
population starts off in a non-cooperative equilibrium, players who are only marginally more cooperative than
non-cooperators get little benefit from assorting with one another. By contrast, in a discrete prisoners' dilemma, tit
for tat cooperators get a big payoff boost from assorting with one another in a non-cooperative equilibrium, relative
to non-cooperators. Since nature arguably offers more opportunities for variable cooperation rather than a strict
dichotomy of cooperation or defection, the continuous prisoners' dilemma may help explain why real-life examples
of tit for tat-like cooperation are extremely rare in nature (e.g. Hammerstein[7]), even though tit for tat seems robust in
theoretical models.

Morality
While it is sometimes thought that morality must involve the constraint of self-interest, David Gauthier famously
argues that co-operating in the prisoners' dilemma on moral principles is consistent with self-interest and the axioms
of game theory.[8] In his opinion, it is most prudent to give up straightforward maximizing and instead adopt a
disposition of constrained maximization, according to which one resolves to cooperate in the belief that the opponent
will respond with the same choice, while in the classical PD it is explicitly stipulated that the response of the
opponent does not depend on the player's choice. This form of contractarianism claims that good moral thinking is
just an elevated and subtly strategic version of basic means-end reasoning.
Douglas Hofstadter expresses a strong personal belief that the mathematical symmetry is reinforced by a moral
symmetry, along the lines of the Kantian categorical imperative: defecting in the hope that the other player
cooperates is morally indefensible. If players treat each other as they would treat themselves, then they will
cooperate.

Real-life examples
These particular examples, involving prisoners and bag switching and so forth, may seem contrived, but there are in
fact many examples in human interaction as well as interactions in nature that have the same payoff matrix. The
prisoner's dilemma is therefore of interest to the social sciences such as economics, politics and sociology, as well as
to the biological sciences such as ethology and evolutionary biology. Many natural processes have been abstracted
into models in which living beings are engaged in endless games of prisoner's dilemma. This wide applicability of
the PD gives the game its substantial importance.

In politics
In political science, for instance, the PD scenario is often used to illustrate the problem of two states engaged in an
arms race. Both will reason that they have two options, either to increase military expenditure or to make an
agreement to reduce weapons. Either state will benefit from military expansion regardless of what the other state
does; therefore, they both incline towards military expansion. The paradox is that both states are acting rationally,
but producing an apparently irrational result. This could be considered a corollary to deterrence theory.

In environmental studies
In environmental studies, the PD is evident in crises such as global climate change. All countries will benefit from a
stable climate, but any single country is often hesitant to curb CO2 emissions. The immediate benefit to an individual
country to maintain current behavior is perceived to be greater than the eventual benefit to all countries if behavior
was changed, therefore explaining the current impasse concerning climate change.[9]

In psychology
In addiction research/behavioral economics, George Ainslie points out[10] that addiction can be cast as an
intertemporal PD problem between the present and future selves of the addict. In this case, defecting means
relapsing, and it is easy to see that not defecting both today and in the future is by far the best outcome, and that
defecting both today and in the future is the worst outcome. The case where one abstains today but relapses in the
future is clearly a bad outcome—in some sense the discipline and self-sacrifice involved in abstaining today have
been "wasted" because the future relapse means that the addict is right back where he started and will have to start
over (which is quite demoralizing, and makes starting over more difficult). The final case, where one engages in the
addictive behavior today while abstaining "tomorrow" will be familiar to anyone who has struggled with an
addiction. The problem here is that (as in other PDs) there is an obvious benefit to defecting "today", but tomorrow
one will face the same PD, and the same obvious benefit will be present then, ultimately leading to an endless string
of defections.

In economics
Advertising is sometimes cited as a real-life example of the prisoner's dilemma. When cigarette advertising was legal
in the United States, competing cigarette manufacturers had to decide how much money to spend on advertising. The
effectiveness of Firm A’s advertising was partially determined by the advertising conducted by Firm B. Likewise, the
profit derived from advertising for Firm B is affected by the advertising conducted by Firm A. If both Firm A and
Firm B chose to advertise during a given period the advertising cancels out, receipts remain constant, and expenses
increase due to the cost of advertising. Both firms would benefit from a reduction in advertising. However, should
Firm B choose not to advertise, Firm A could benefit greatly by advertising. Nevertheless, the optimal amount of
advertising by one firm depends on how much advertising the other undertakes. As the best strategy is dependent on
what the other firm chooses there is no dominant strategy and this is not a prisoner's dilemma but rather is an
example of a stag hunt. The outcome is similar, though, in that both firms would be better off were they to advertise
less than in the equilibrium. Sometimes cooperative behaviors do emerge in business situations. For instance,
cigarette manufacturers endorsed the creation of laws banning cigarette advertising, understanding that this would
reduce costs and increase profits across the industry.[11] This analysis is likely to be pertinent in many other business
situations involving advertising.
Another example of the prisoner's dilemma in economics is competition-oriented objectives.[12] When firms are
aware of the activities of their competitors, they tend to pursue policies that are designed to oust their competitors as
opposed to maximizing the performance of the firm. This approach impedes the firm from functioning at its
maximum capacity because it limits the scope of the strategies employed by the firms.
Without enforceable agreements, members of a cartel are also involved in a (multi-player) prisoners' dilemma.[13]
'Cooperating' typically means keeping prices at a pre-agreed minimum level. 'Defecting' means selling under this
minimum level, instantly stealing business (and profits) from other cartel members. Anti-trust authorities want
potential cartel members to mutually defect, ensuring the lowest possible prices for consumers.

In law
The theoretical conclusion of PD is one reason why, in many countries, plea bargaining is forbidden. Often, precisely
the PD scenario applies: it is in the interest of both suspects to confess and testify against the other prisoner/suspect,
even if each is innocent of the alleged crime.

Multiplayer dilemmas
Many real-life dilemmas involve multiple players. Although metaphorical, Hardin's tragedy of the commons may be
viewed as an example of a multi-player generalization of the PD: Each villager makes a choice for personal gain or
restraint. The collective reward for unanimous (or even frequent) defection is very low payoffs (representing the
destruction of the "commons"). The commons are not always exploited: William Poundstone, in a book about the
prisoner's dilemma (see References below), describes a situation in New Zealand where newspaper boxes are left
unlocked. It is possible for people to take a paper without paying (defecting) but very few do, feeling that if they do
not pay then neither will others, destroying the system. Subsequent research by Elinor Ostrom, winner of the 2009
Nobel Prize in Economics, proved that the tragedy of the commons is oversimplified, with the negative outcome
shaped by outside influences. Without complicating pressures, groups communicate and manage the commons
among themselves for their mutual benefit, enforcing social norms to preserve the resource and achieve the maximum
good for the group, an example of effecting the best-case outcome for the PD.[14] [15]

Related games

Closed-bag exchange
Hofstadter[16] once suggested that people often find problems such as the PD problem easier to understand when it is
illustrated in the form of a simple game, or trade-off. One of several examples he used was "closed bag exchange":
Two people meet and exchange closed bags, with the understanding that one of them contains money, and the
other contains a purchase. Either player can choose to honor the deal by putting into his or her bag what he or
she agreed, or he or she can defect by handing over an empty bag.
In this game, defection is always the best course, implying that rational agents will never play. However, in this case
both players cooperating and both players defecting actually give the same result, assuming there are no gains from
trade, so chances of mutual cooperation, even in repeated games, are few.

Friend or Foe?
Friend or Foe? is a game show that aired from 2002 to 2005 on the Game Show Network in the United States. It is
an example of the prisoner's dilemma game tested by real people, but in an artificial setting. On the game show, three
pairs of people compete. As each pair is eliminated, it plays a game similar to the prisoner's dilemma to determine
how the winnings are split. If they both cooperate (Friend), they share the winnings 50–50. If one cooperates and the
other defects (Foe), the defector gets all the winnings and the cooperator gets nothing. If both defect, both leave with
nothing. Notice that the payoff matrix is slightly different from the standard one given above, as the payouts for the
"both defect" and the "cooperate while the opponent defects" cases are identical. This makes the "both defect" case a
weak equilibrium, compared with being a strict equilibrium in the standard prisoner's dilemma. If you know your
opponent is going to vote Foe, then your choice does not affect your winnings. In a certain sense, Friend or Foe has
a payoff model between prisoner's dilemma and the game of Chicken.
The payoff matrix is

           Cooperate  Defect
Cooperate  1, 1       0, 2
Defect     2, 0       0, 0

This payoff matrix was later used on the British television programmes Shafted and Golden Balls. The latter show
has been analyzed by a team of economists. See: Split or Steal? Cooperative Behavior When the Stakes are Large.
[17]

It was also used earlier in the UK Channel 4 gameshow Trust Me, hosted by Nick Bateman, in 2000.

Notes
[1] Dawkins, Richard (1989). The Selfish Gene. Oxford University Press. ISBN 0-19-286092-5. Page 204 of the paperback edition.
[2] Shy, O. (1996). Industrial Organization: Theory and Applications. Cambridge, Mass.: The MIT Press.
[3] For example, see the 2003 study “Bayesian Nash equilibrium; a statistical test of the hypothesis”
(http://econ.hevra.haifa.ac.il/~mbengad/seminars/whole1.pdf) for discussion of the concept and whether it can apply
in real economic or strategic situations (from Tel Aviv University).
[4] http://www.ecs.soton.ac.uk/~nrj
[5] The 2004 Prisoners' Dilemma Tournament Results (http://www.prisoners-dilemma.com/results/cec04/ipd_cec04_full_run.html)
show University of Southampton's strategies in the first three places, despite having fewer wins and many more losses
than the GRIM strategy. (Note that in a PD tournament, the aim of the game is not to “win” matches; that can easily be
achieved by frequent defection.) It should also be pointed out that even without implicit collusion between software
strategies (exploited by the Southampton team) tit for tat is not always the absolute winner of any given tournament;
it would be more precise to say that its long-run results over a series of tournaments outperform its rivals. (In any
one event a given strategy can be slightly better adjusted to the competition than tit for tat, but tit for tat is more
robust.) The same applies for the tit for tat with forgiveness variant, and other optimal strategies: on any given day
they might not 'win' against a specific mix of counter-strategies. An alternative way of putting it is using the
Darwinian ESS simulation. In such a simulation, tit for tat will almost always come to dominate, though nasty
strategies will drift in and out of the population because a tit for tat population is penetrable by non-retaliating
nice strategies, which in turn are easy prey for the nasty strategies. Richard Dawkins showed that here, no static mix
of strategies forms a stable equilibrium and the system will always oscillate between bounds.
[6] Le, S. and R. Boyd (2007). "Evolutionary Dynamics of the Continuous Iterated Prisoners' Dilemma". Journal of
Theoretical Biology, Volume 245, 258–267.
[7] Hammerstein, P. (2003). Why is reciprocity so rare in social animals? A protestant appeal. In: P. Hammerstein (ed.),
Genetic and Cultural Evolution of Cooperation, MIT Press, pp. 83–94.
[8] "Contractarianism". First published Sun Jun 18, 2000; substantive revision Wed Apr 4, 2007
(http://plato.stanford.edu/entries/contractarianism/#3). Stanford Encyclopedia of Philosophy.
[9] "Markets & Data" (http://www.economist.com/finance/displaystory.cfm?story_id=9867020). The Economist. 2007-09-27.
[10] George Ainslie (2001). Breakdown of Will. ISBN 0-521-59694-7.
[11] This argument for the development of cooperation through trust is given in The Wisdom of Crowds, where it is
argued that long-distance capitalism was able to form around a nucleus of Quakers, who always dealt honourably with
their business partners (rather than defecting and reneging on promises, a phenomenon that had discouraged earlier
long-term unenforceable overseas contracts). It is argued that dealings with reliable merchants allowed the meme for
cooperation to spread to other traders, who spread it further until a high degree of cooperation became a profitable
strategy in general commerce.
[12] J. Scott Armstrong and Kesten C. Greene (2007). "Competitor-oriented Objectives: The Myth of Market Share"
(http://marketing.wharton.upenn.edu/documents/research/CompOrientPDF 11-27 (2).pdf). pp. 116–134.
[13] Nicholson, Walter (2000). Intermediate Microeconomics (8th ed.). Harcourt.
[14] http://en.wikipedia.org/wiki/Tragedy_of_the_commons
[15] http://volokh.com/2009/10/12/elinor-ostrom-and-the-tragedy-of-the-commons/
[16] Hofstadter, Douglas R. (1985). Metamagical Themas: questing for the essence of mind and pattern. Bantam Dell Pub
Group. ISBN 0-465-04566-9. See Ch. 29, "The Prisoner's Dilemma Computer Tournaments and the Evolution of Cooperation".
[17] http://ssrn.com/abstract=1592456

References
• Robert Aumann (1959). “Acceptable points in general cooperative n-person games”, in R. D. Luce and A. W. Tucker
(eds.), Contributions to the Theory of Games IV, Annals of Mathematics Study 40, 287–324. Princeton University
Press, Princeton NJ.
• Axelrod, R. (1984). The Evolution of Cooperation. ISBN 0-465-02121-2.
• Bicchieri, Cristina (1993). Rationality and Coordination. Cambridge University Press.
• Kenneth Binmore, Fun and Games.
• David M. Chess (1988). "Simulating the evolution of behavior: the iterated prisoners' dilemma problem". Complex
Systems, 2:663–670.
• Dresher, M. (1961). The Mathematics of Games of Strategy: Theory and Applications. Prentice-Hall, Englewood
Cliffs, NJ.
• Flood, M.M. (1952). Some experimental games. Research memorandum RM-789. RAND Corporation, Santa
Monica, CA.
• Kaminski, Marek M. (2004). Games Prisoners Play (http://webfiles.uci.edu/mkaminsk/www/book.html). Princeton
University Press. ISBN 0-691-11721-7.
• Poundstone, W. (1992). Prisoner's Dilemma. Doubleday, NY.
• Greif, A. (2006). Institutions and the Path to the Modern Economy: Lessons from Medieval Trade. Cambridge
University Press, Cambridge, UK.
• Rapoport, Anatol and Albert M. Chammah (1965). Prisoner's Dilemma. University of Michigan Press.
• S. Le and R. Boyd (2007). "Evolutionary Dynamics of the Continuous Iterated Prisoner's Dilemma". Journal of
Theoretical Biology, Volume 245, 258–267. Full text: http://letuhuy.bol.ucla.edu/academic/cont_ipd_Le_Boyd_JTB.pdf
• A. Rogers, R. K. Dash, S. D. Ramchurn, P. Vytelingum and N. R. Jennings (2007). “Coordinating team players
within a noisy iterated Prisoner’s Dilemma tournament” (http://users.ecs.soton.ac.uk/nrj/download-files/tcs07.pdf).
Theoretical Computer Science 377 (1–3), 243–259.
• M. J. van den Assem, D. van Dolder and R. H. Thaler (2010). "Split or Steal? Cooperative Behavior When the
Stakes are Large" (http://ssrn.com/abstract=1592456).

Further reading
• Bicchieri, Cristina and Mitchell Green (1997) "Symmetry Arguments for Cooperation in the Prisoner's Dilemma",
in G. Holmstrom-Hintikka and R. Tuomela (eds.), Contemporary Action Theory: The Philosophy and Logic of
Social Action, Kluwer.
• Iterated Prisoner's Dilemma Bibliography web links (http://aleph0.clarku.edu/~djoyce/Moth/webrefs.html),
July, 2005.
• Plous, S. (1993). Prisoner's Dilemma or Perceptual Dilemma? Journal of Peace Research, Vol. 30, No. 2,
163–179.

External links
• Prisoner's Dilemma (Stanford Encyclopedia of Philosophy) (http://plato.stanford.edu/entries/
prisoner-dilemma/)
• Effects of Tryptophan Depletion on the Performance of an Iterated Prisoner's Dilemma Game in Healthy Adults
(http://www.nature.com/npp/journal/v31/n5/full/1300932a.html) – Nature Neuropsychopharmacology
• Is there a "dilemma" in Prisoner's Dilemma (http://www.egwald.ca/operationsresearch/prisonersdilemma.
php) by Elmer G. Wiens
• "Games Prisoners Play" (http://webfiles.uci.edu/mkaminsk/www/book.html) – game-theoretic analysis of
interactions among actual prisoners, including PD.
• Iterated prisoner's dilemma game (http://www.iterated-prisoners-dilemma.net/)
• Another version of the iterated prisoner's dilemma game (http://kane.me.uk/ipd/)
• Another version of the iterated prisoner's dilemma game (http://www.gametheory.net/Web/PDilemma/)
• Iterated prisoner's dilemma game (http://www.paulspages.co.uk/hmd/) applied to Big Brother TV show
situation.
• The Bowerbird's Dilemma (http://www.msri.org/ext/larryg/pages/15.htm) The Prisoner's Dilemma in
ornithology; a mathematical cartoon by Larry Gonick.
• Examples of Prisoners' dilemma (http://www.economics.li/downloads/egefdile.pdf)
• Multiplayer game based on prisoner dilemma (http://www.gohfgl.com/) Play prisoner's dilemma over IRC,
by Axiologic Research.
• Prisoner's Dilemma Party Game (http://fortwain.com/pddg.html) A party game based on the prisoner's
dilemma

• The Edge cites Robert Axelrod's book and discusses the success of U2 following the principles of IPD. (http://
www.rte.ie/tv/theview/archive/20080331.html)
• Classical and Quantum Contents of Solvable Game Theory on Hilbert Space (http://arxiv.org/abs/quant-ph/
0503233v2)
• Radiolab: "The Good Show" (http://www.radiolab.org/2010/dec/14/). December 14, 2010. No. 1, season 9.

Traveler's dilemma
In game theory, the traveler's dilemma (sometimes abbreviated TD) is a type of non-zero-sum game in which two
players attempt to maximize their own payoff, without any concern for the other player's payoff.
The game was formulated in 1994 by Kaushik Basu and goes as follows:[1] [2]
An airline loses two suitcases belonging to two different travelers. Both suitcases happen to be identical and contain
identical antiques. An airline manager tasked to settle the claims of both travelers explains that the airline is liable for a
maximum of $100 per suitcase (he is unable to find out directly the price of the items), and in order to determine an
honest appraised value of the antiques the manager separates both travelers so they can't confer, and asks them to
write down the amount of their value at no less than $2 and no larger than $100. He also tells them that if both write
down the same number, he will treat that number as the true dollar value of both suitcases and reimburse both
travelers that amount. However, if one writes down a smaller number than the other, this smaller number will be
taken as the true dollar value, and both travelers will receive that amount along with a bonus/malus: $2 extra will be
paid to the traveler who wrote down the lower value and a $2 deduction will be taken from the person who wrote
down the higher amount. The challenge is: what strategy should both travelers follow to decide the value they should
write down?
Naively, one might expect a traveler's optimum choice to be $100; that is, the traveler values the antiques at the
airline manager's maximum allowed price. Remarkably, and, to many, counter-intuitively, the traveler's optimum
choice (in terms of Nash equilibrium) is in fact $2; that is, the traveler values the antiques at the airline manager's
minimum allowed price.
For an understanding of this paradoxical result, consider the following rather whimsical proof.
• Alice, having lost her antiques, is asked their value. Alice's first thought is to quote $100, the maximum
permissible value.
• On reflection, though, she realizes that her fellow traveler, Babar, might also quote $100. And so Alice changes
her mind, and decides to quote $99, which, if Babar quotes $100, will pay $101!
• But Babar, being in an identical position to Alice, might also think of quoting $99. And so Alice changes her
mind, and decides to quote $98, which, if Babar quotes $99, will pay $100! This is greater than the $99 Alice
would receive if both she and Babar quoted $99.
• This cycle of thought continues, until Alice finally decides to quote just $2 - the minimum permissible price!
Another proof goes as follows:
• If Alice only wants to maximize her own payoff, choosing $99 trumps choosing $100. If Babar chooses any dollar
value from $2 to $98 inclusive, $99 and $100 give equal payoffs; if Babar chooses $99 or $100, choosing $99 nets
Alice more than choosing $100 would.
• A similar line of reasoning shows that choosing $98 is always better for Alice than choosing $99. The only
situation where choosing $99 would give a higher payoff than choosing $98 is if Babar chooses $100, but if
Babar is only seeking to maximize his own profit, he will always choose $99 instead of $100.
• This line of reasoning can be applied to all of Alice's whole-dollar options until she finally reaches $2, the lowest
price.
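
This unraveling can be reproduced mechanically. A small Python sketch (our formulation of the payoff rule, which is
formalized in the section below) follows the chain of best responses from $100 down:

    def payoff(x, y):
        # Payoff to the traveler who writes x when the other writes y.
        if x == y:
            return x
        return min(x, y) + (2 if x < y else -2)

    def best_response(y):
        # Best whole-dollar reply to a fixed opponent claim y.
        return max(range(2, 101), key=lambda x: payoff(x, y))

    claim = 100
    while best_response(claim) != claim:
        claim = best_response(claim)
    print(claim)  # 2: the unraveling stops only at the minimum claim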

The ($2, $2) outcome in this instance is the Nash equilibrium of the game. However, when the game is played
experimentally, most participants select the value $100 or a value close to $100, including both those who have not
thought through the logic of the decision and those who understand themselves to be making a non-rational choice.
Furthermore, the travelers are rewarded by deviating strongly from the Nash equilibrium in the game and obtain
much higher rewards than would be realized with the purely rational strategy. These experiments (and others, such as
focal points) show that the majority of people do not use purely rational strategies, but the strategies they do use are
demonstrably optimal. This paradox has led some to question the value of game theory in general, while others have
suggested that a new kind of reasoning is required to understand how it can be quite rational ultimately to make
non-rational choices. Note that the $100 choice here is the optimal pure strategy under a different model of decision
making called superrationality, which assumes that all logical thinkers must use the same strategy, so pure
superrational strategies are restricted to the diagonal of the payoff matrix.
One variation of the original traveler's dilemma in which both travelers are offered only two integer choices, $2 or
$3, is identical mathematically to the Prisoner's dilemma and thus the traveler's dilemma can be viewed as an
extension of prisoner's dilemma. The traveler's dilemma is also related to the game Guess 2/3 of the average in that
both involve deep iterative deletion of dominated strategies in order to demonstrate the Nash equilibrium, and that
both lead to experimental results that deviate markedly from the game-theoretical predictions.

Payoff matrix
The canonical payoff matrix is shown below (if only integer inputs are taken into account):

Canonical TD payoff matrix

       100       99       98       97      ⋯   3     2
100    100, 100  97, 101  96, 100  95, 99  ⋯   1, 5  0, 4
99     101, 97   99, 99   96, 100  95, 99  ⋯   1, 5  0, 4
98     100, 96   100, 96  98, 98   95, 99  ⋯   1, 5  0, 4
97     99, 95    99, 95   99, 95   97, 97  ⋯   1, 5  0, 4
⋮      ⋮         ⋮        ⋮        ⋮       ⋱   ⋮     ⋮
3      5, 1      5, 1     5, 1     5, 1    ⋯   3, 3  0, 4
2      4, 0      4, 0     4, 0     4, 0    ⋯   4, 0  2, 2

Denoting by $S = \{2, 3, \ldots, 100\}$ the set of strategies available to both players and by
$f : S \times S \to \mathbb{R}$ the payoff function of one of them, we can write

$f(x, y) = \min(x, y) + 2\,\operatorname{sgn}(y - x)$

(Note that the other player receives $f(y, x)$, since the game is quantitatively symmetric.)
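
A direct transcription of this payoff function, with a few spot-checks against the matrix above (an illustrative
sketch in Python):

    def sgn(z):
        return (z > 0) - (z < 0)

    def f(x, y):
        # f(x, y) = min(x, y) + 2*sgn(y - x); the other player receives f(y, x).
        return min(x, y) + 2 * sgn(y - x)

    assert f(100, 100) == 100 and f(99, 100) == 101 and f(100, 99) == 97
    assert f(3, 2) == 0 and f(2, 3) == 4 and f(2, 2) == 2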

References
[1] Kaushik Basu, "The Traveler's Dilemma: Paradoxes of Rationality in Game Theory". American Economic Review,
Vol. 84, No. 2, pages 391–395; May 1994.
[2] Kaushik Basu, "The Traveler's Dilemma" (http://www.sciam.com/article.cfm?chanID=sa006&colID=1&
articleID=7750A576-E7F2-99DF-3824E0B1C2540D47). Scientific American Magazine, June 2007.

Coordination game
In game theory, coordination games are a class of games with multiple pure strategy Nash equilibria in which
players choose the same or corresponding strategies. Coordination games are a formalization of the idea of a
coordination problem, which is widespread in the social sciences, including economics, meaning situations in
which all parties can realize mutual gains, but only by making mutually consistent decisions. A common application
is the choice of technological standards.
For a classic example of a coordination game, consider the 2-player, 2-strategy game with the payoff matrix shown
in Fig. 1.

       Left  Right
Up     A, a  C, c
Down   B, b  D, d

Fig. 1: 2-player coordination game

If this game is a coordination game, then the following inequalities in payoffs hold for player 1 (rows):
A > B, D > C, and for player 2 (columns): a > c, d > b. In this game the strategy profiles {Left, Up} and
{Right, Down} are pure Nash equilibria. This setup can be extended to more than two strategies, where strategies
are usually sorted so that the Nash equilibria lie on the diagonal from top left to bottom right, as well as to
games with more than two players.

Examples
A typical case for a coordination game is choosing the side of the road upon which to drive, a social standard which
can save lives if it is widely adhered to. In a simplified example, assume that two drivers meet on a narrow dirt road.
Both have to swerve in order to avoid a head-on collision. If both execute the same swerving maneuver they will
manage to pass each other, but if they choose differing maneuvers they will collide. In the payoff matrix in Fig. 2,
successful passing is represented by a payoff of 10, and a collision by a payoff of 0.
In this case there are two pure Nash equilibria: either both swerve to the left, or both swerve to the right. In this
example, it doesn't matter which side both players pick, as long as they both pick the same. Both solutions are Pareto
efficient. This is not true for all coordination games, as the pure coordination game in Fig. 3 shows. Pure (or
common interest) coordination is the game where the players both prefer the same Nash equilibrium outcome, here
both players prefer partying over both staying at home to watch TV. The {Party, Party} outcome Pareto dominates
the {Home, Home} outcome, just as both Pareto dominate the other two outcomes, {Party, Home} and {Home,
Party}.

        Left    Right
Left    10, 10  0, 0
Right   0, 0    10, 10

Fig. 2: Choosing sides

        Party   Home
Party   10, 10  0, 0
Home    0, 0    5, 5

Fig. 3: Pure coordination game

        Party  Home
Party   10, 5  0, 0
Home    0, 0   5, 10

Fig. 4: Battle of the sexes

        Stag    Hare
Stag    10, 10  0, 7
Hare    7, 0    7, 7

Fig. 5: Stag hunt

This is different in another type of coordination game commonly called battle of the sexes (or conflicting interest
coordination), as seen in Fig. 4. In this game both players prefer engaging in the same activity over going alone, but
their preferences differ over which activity they should engage in. Player 1 prefers that they both party while player
2 prefers that they both stay at home.
Finally, the stag hunt game in Fig. 5 shows a situation in which both players (hunters) can benefit if they cooperate
(hunting a stag). However, cooperation might fail, because each hunter has an alternative which is safer because it
does not require cooperation to succeed (hunting a hare). This example of the potential conflict between safety and
social cooperation is originally due to Jean-Jacques Rousseau.

Mixed Nash equilibrium


Coordination games also have mixed strategy Nash equilibria. In the generic coordination game above, a mixed
Nash equilibrium is given by probabilities p = (d-b)/(a+d-b-c) to play Up and 1-p to play Down for player 1, and q =
(D-C)/(A+D-B-C) to play Left and 1-q to play Right for player 2. Since d > b and d-b < a+d-b-c, p is always between
zero and one, so existence is assured (similarly for q).
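
As a quick sketch (our function; payoff labels as in Fig. 1), the mixed equilibrium can be computed directly from
these formulas, here for the pure coordination game of Fig. 3:

    def mixed_equilibrium(A, B, C, D, a, b, c, d):
        # p = probability player 1 plays Up, q = probability player 2 plays Left,
        # using the formulas from the text.
        p = (d - b) / (a + d - b - c)
        q = (D - C) / (A + D - B - C)
        return p, q

    # Fig. 3 mapped onto Fig. 1 (Up/Left = Party, Down/Right = Home):
    print(mixed_equilibrium(10, 0, 0, 5, 10, 0, 0, 5))  # both probabilities are 1/3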
The reaction correspondences for 2×2 coordination games are shown in Fig. 6.
The pure Nash equilibria are the points in the bottom left and top right corners of the strategy space, while the mixed
Nash equilibrium lies in the middle, at the intersection of the dashed lines.
Unlike the pure Nash equilibria, the mixed equilibrium is not an evolutionarily stable strategy (ESS). The mixed
Nash equilibrium is also Pareto dominated by the two pure Nash equilibria (since the players will fail to coordinate
with non-zero probability), a quandary that led Robert Aumann to propose the refinement of a correlated
equilibrium.

Coordination and equilibrium selection


Games like the driving example above have illustrated the need for solutions to
coordination problems. Often we are confronted with circumstances where we
must solve coordination problems without the ability to communicate with
our partner. Many authors have suggested that particular equilibria are focal
for one reason or another. For instance, some equilibria may give higher
payoffs, be naturally more salient, may be more fair, or may be safer.
Sometimes these refinements conflict, which makes certain coordination
games especially complicated and interesting (e.g. the Stag hunt, in which
{Stag,Stag} has higher payoffs, but {Hare,Hare} is safer).
Fig. 6: Reaction correspondences for 2×2 coordination games. Nash equilibria are shown as points where the two
players' correspondences agree, i.e., cross.

Other games with externalities


Coordination games are closely linked to the economic concept of externalities, and in particular positive network
externalities, the benefit reaped from being in the same network as other agents. Conversely, game theorists have
modeled behavior under negative externalities where choosing the same action creates a cost rather than a benefit.
The generic term for this class of game is anti-coordination game. The best-known example of a 2-player
anti-coordination game is the game of Chicken (also known as Hawk-Dove game). Using the payoff matrix in Figure
1, a game is an anti-coordination game if B > A and C > D for row-player 1 (with lowercase analogues for
column-player 2). {Down, Left} and {Up, Right} are the two pure Nash equilibria. Chicken also requires that A > C,
so a change from {Up, Left} to {Up, Right} improves player 2's payoff but reduces player 1's payoff, introducing
conflict. This counters the standard coordination game setup, where all unilateral changes in a strategy lead to either
mutual gain or mutual loss.
The concept of anti-coordination games has been extended to multi-player situations. A crowding game is defined as
a game where each player's payoff is non-increasing over the number of other players choosing the same strategy
(i.e., a game with negative network externalities). For instance, a driver could take U.S. Route 101 or Interstate 280
from San Francisco to San Jose. While 101 is shorter, 280 is considered more scenic, so drivers might have different
preferences between the two independent of the traffic flow. But each additional car on either route will slightly
increase the drive time on that route, so additional traffic creates negative network externalities, and even
scenery-minded drivers might opt to take 101 if 280 becomes too crowded. A congestion game is a crowding game
in networks. The minority game is a game where the only objective for all players is to be part of the smaller of two
groups. A well-known example of the minority game is the El Farol Bar problem proposed by W. Brian Arthur.
A hybrid form of coordination and anti-coordination is the discoordination game, where one player's incentive is to
coordinate while the other player tries to avoid this. Discoordination games have no pure Nash equilibria. In Figure
1, choosing payoffs so that A > B, D < C, while a < b, c > d, creates a discoordination game. In each of the four
possible states either player 1 or player 2 is better off by switching their strategy, so the only Nash equilibrium is
mixed. The canonical example of a discoordination game is the matching pennies game.

References
• Russell Cooper: Coordination Games, Cambridge: Cambridge University Press, 1998 (ISBN 0-521-57896-5).
• Avinash Dixit & Barry Nalebuff: Thinking Strategically, New York: Norton, 1991 (ISBN 0-393-32946-1).
• Robert Gibbons: Game Theory for Applied Economists, Princeton, New Jersey: Princeton University Press, 1992
(ISBN 0-691-00395-5).
• David Kellogg Lewis: Convention: A Philosophical Study, Oxford: Blackwell, 1969 (ISBN 0-631-23257-5).
• Martin J. Osborne & Ariel Rubinstein: A Course in Game Theory, Cambridge, Massachusetts: MIT Press, 1994
(ISBN 0-262-65040-1).
• Thomas Schelling: The Strategy of Conflict, Cambridge, Massachusetts: Harvard University Press, 1960 (ISBN
0-674-84031-3).
• Thomas Schelling: Micromotives and Macrobehavior, New York: Norton, 1978 (ISBN 0-393-32946-1).
• Edna Ullmann-Margalit: The Emergence of Norms, Oxford Un. Press, 1977. (or Clarendon Press 1978).
• Adrian Piper: review of 'The Emergence of Norms' [1] in The Philosophical Review, vol. 97, 1988, pp. 99-107.

References
[1] http:/ / links. jstor. org/ sici?sici=0031-8108(198801)97%3A1%3C99%3ATEON%3E2. 0. CO%3B2-0

Chicken
The game of chicken, also known as the hawk-dove or snowdrift[1] game, is an influential model of conflict for two
players in game theory. The principle of the game is that while each player prefers not to yield to the other, the worst
possible outcome occurs when both players do not yield.
The name "chicken" has its origins in a game in which two drivers drive towards each other on a collision course:
one must swerve, or both may die in the crash, but if one driver swerves and the other does not, the one who swerved
will be called a "chicken," meaning a coward; this terminology is most prevalent in political science and economics.
The name "Hawk-Dove" refers to a situation in which there is a competition for a shared resource and the contestants
can choose either conciliation or conflict; this terminology is most commonly used in biology and evolutionary game
theory. From a game-theoretic point of view, "chicken" and "hawk-dove" are identical; the different names stem
from parallel development of the basic principles in different research areas.[2] The game has also been used to
describe the mutual assured destruction of nuclear warfare, especially the sort of brinkmanship involved in the
Cuban Missile Crisis.[3]

Popular versions
The game of chicken models two drivers, both headed for a single lane bridge from opposite directions. The first to
swerve away yields the bridge to the other. If neither player swerves, the result is a costly deadlock in the middle of
the bridge, or a potentially fatal head-on collision. It is presumed that the best thing for each driver is to stay straight
while the other swerves (since the other is the "chicken" while a crash is avoided). Additionally, a crash is presumed
to be the worst outcome for both players. This yields a situation where each player, in attempting to secure his best
outcome, risks the worst.
The phrase game of chicken is also used as a metaphor for a situation where two parties engage in a showdown
where they have nothing to gain, and only pride stops them from backing down. Bertrand Russell famously
compared the game of Chicken to nuclear brinkmanship:
Since the nuclear stalemate became apparent, the Governments of East and West have adopted the
policy which Mr. Dulles calls 'brinkmanship'. This is a policy adapted from a sport which, I am told, is
practiced by some youthful degenerates. This sport is called 'Chicken!'. It is played by choosing a long
straight road with a white line down the middle and starting two very fast cars towards each other from
opposite ends. Each car is expected to keep the wheels of one side on the white line. As they approach
each other, mutual destruction becomes more and more imminent. If one of them swerves from the
white line before the other, the other, as he passes, shouts 'Chicken!', and the one who has swerved
becomes an object of contempt. As played by irresponsible boys, this game is considered decadent and
immoral, though only the lives of the players are risked. But when the game is played by eminent
statesmen, who risk not only their own lives but those of many hundreds of millions of human beings, it
is thought on both sides that the statesmen on one side are displaying a high degree of wisdom and
courage, and only the statesmen on the other side are reprehensible. This, of course, is absurd. Both are
to blame for playing such an incredibly dangerous game. The game may be played without misfortune a
few times, but sooner or later it will come to be felt that loss of face is more dreadful than nuclear
annihilation. The moment will come when neither side can face the derisive cry of 'Chicken!' from the
other side. When that moment is come, the statesmen of both sides will plunge the world into
destruction.[3]

Brinkmanship involves the introduction of an element of uncontrollable risk: even if all players act rationally in the
face of risk, uncontrollable events can still trigger the catastrophic outcome.[4] In the "chickie run" scene from the
film Rebel Without a Cause, this happens when Corey Allen's character cannot escape from the car and dies in the
crash. The opposite scenario occurs in Footloose where Ren McCormack is stuck in his tractor and hence wins the
game as he can't play "chicken". The basic game-theoretic formulation of Chicken has no element of variable,
potentially catastrophic, risk, and is also the contraction of a dynamic situation into a one-shot interaction.
The hawk-dove version of the game imagines two players (animals) contesting an indivisible resource who can
choose between two strategies, one more escalated than the other.[5] They can use threat displays (play Dove), or
physically attack each other (play Hawk). If both players choose the Hawk strategy, then they fight until one is
injured and the other wins. If only one player chooses Hawk, then this player defeats the Dove player. If both players
play Dove, there is a tie, and each player receives a payoff lower than the profit of a hawk defeating a dove.

Game theoretic applications

Chicken

          Swerve     Straight
Swerve    Tie, Tie   Lose, Win
Straight  Win, Lose  Crash, Crash

Fig. 1: A payoff matrix of Chicken

          Swerve  Straight
Swerve    0, 0    -1, +1
Straight  +1, -1  -10, -10

Fig. 2: Chicken with numerical payoffs

A formal version of the game of Chicken has been the subject of serious research in game theory.[6] Two versions of
the payoff matrix for this game are presented here (Figures 1 and 2). In Figure 1 the outcomes are represented in
words, where each player would prefer to win over tying, prefer to tie over losing, and prefer to lose over crashing.
Figure 2 presents arbitrarily set numerical payoffs which theoretically conform to this situation. Here the benefit of
winning is 1, the cost of losing is -1, and the cost of crashing is -10.
Both Chicken and Hawk-Dove are anti-coordination games, in which it is mutually beneficial for the players to play
different strategies. In this way it can be thought of as the opposite of a coordination game, where playing the same
strategy Pareto dominates playing different strategies. The underlying concept is that players use a shared resource.
In coordination games, sharing the resource creates a benefit for all: the resource is non-rivalrous, and the shared
usage creates positive externalities. In anti-coordination games the resource is rivalrous but non-excludable and
sharing comes at a cost (or negative externality).
Because the loss of swerving is so trivial compared to the crash that occurs if nobody swerves, the reasonable
strategy would seem to be to swerve before a crash is likely. Yet, knowing this, if one believes one's opponent to be
reasonable, one may well decide not to swerve at all, in the belief that he will be reasonable and decide to swerve,
leaving the other player the winner. This unstable situation can be formalized by saying there is more than one Nash
equilibrium, which is a pair of strategies for which neither player gains by changing his own strategy while the other
stays the same. (In this case, the pure strategy equilibria are the two situations wherein one player swerves while the
other does not.)

Hawk-Dove

       Hawk              Dove
Hawk   (V−C)/2, (V−C)/2  V, 0
Dove   0, V              V/2, V/2

Fig. 3: Hawk-Dove game

       Hawk  Dove
Hawk   X, X  W, L
Dove   L, W  T, T

Fig. 4: General Hawk-Dove game

In the biological literature, this game is referred to as Hawk-Dove. The earliest presentation of a form of the
Hawk-Dove game was by John Maynard Smith and George Price in their 1973 Nature paper, "The logic of animal
conflict".[7] The traditional [5] [8] payoff matrix for the Hawk-Dove game is given in Figure 3, where V is the value
of the contested resource, and C is the cost of an escalated fight. It is (almost always) assumed that the value of the
resource is less than the cost of a fight, i.e., C > V > 0. If C ≤ V, the resulting game is not a game of Chicken.
The exact value of the Dove vs. Dove payoff varies between model formulations. Sometimes the players are
assumed to split the payoff equally (V/2 each), other times the payoff is assumed to be zero (since this is the
expected payoff in a war of attrition game, which is the presumed model for a contest decided by display duration).
While the Hawk-Dove game is typically taught and discussed with the payoffs in terms of V and C, the solutions
hold true for any matrix with the payoffs in Figure 4, where W > T > L > X.[8]
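
Under the V, C parametrization of Fig. 3, equating the expected payoffs of Hawk and Dove against an opponent who
plays Hawk with probability q gives q = V/C; a small sketch (our function, with illustrative numbers):

    from fractions import Fraction

    def hawk_probability(V, C):
        # Mixed-equilibrium probability of playing Hawk, valid when C > V > 0:
        # solving q(V - C)/2 + (1 - q)V = (1 - q)V/2 gives q = V/C.
        assert C > V > 0, "with C <= V the game is no longer Chicken"
        return Fraction(V, C)

    print(hawk_probability(2, 10))  # 1/5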

Hawk-Dove variants
Biologists have explored modified versions of classic Hawk-Dove game to investigate a number of biologically
relevant factors. These include adding variation in resource holding potential, and differences in the value of winning
to the different players,[9] allowing the players to threaten each other before choosing moves in the game,[10] and
extending the interaction to two plays of the game.[11]

Pre-commitment
One tactic in the game is for one party to signal their intentions convincingly before the game begins. For example, if
one party were to ostensibly disable their steering wheel just before the match, the other party would be compelled to
swerve.[12] This shows that, in some circumstances, reducing one's own options can be a good strategy. One
real-world example is a protester who handcuffs himself to an object, so that no threat can be made which would
compel him to move (since he cannot move). Another example, taken from fiction, is found in Stanley Kubrick's Dr.
Strangelove. In that film, the Russians sought to deter American attack by building a "doomsday machine," a device
that would trigger world annihilation if Russia was hit by nuclear weapons or if any attempt were made to disarm it.
However, the Russians failed to signal — they deployed their doomsday machine covertly.
Players may also make non-binding threats to not swerve. This has been modeled explicitly in the Hawk-Dove game.
Such threats work, but must be wastefully costly if the threat is one of two possible signals ("I will not swerve"/"I
will swerve"), or they will be costless if there are three or more signals (in which case the signals will function as a
game of "Rock, Paper, Scissors").[10]

Best response mapping and Nash equilibria

Fig. 5: Reaction correspondences for both players in a discoordination game. Compare with the replicator dynamic
vector fields below.

All anti-coordination games have three Nash equilibria. Two of these are pure contingent strategy profiles, in which
each player plays one of the pair of strategies, and the other player chooses the opposite strategy. The third one is a
mixed equilibrium, in which each player probabilistically chooses between the two pure strategies. Either the pure,
or mixed, Nash equilibria will be evolutionarily stable strategies depending upon whether uncorrelated asymmetries
exist.
The best response mapping for all 2x2 anti-coordination games is shown in Figure 5. The variables x and y in Figure
5 are the probabilities of playing the escalated strategy ("Hawk" or "Don't swerve") for players X and Y respectively.
The line in graph on the left shows the optimum probability of playing the escalated strategy for player Y as a
function of x. The line in the second graph shows the optimum probability of playing the escalated strategy for
player X as a function of y (the axes have not been rotated, so the dependent variable is plotted on the abscissa, and
the independent variable is plotted on the ordinate). The Nash equilibria are where the players' correspondences
agree, i.e., cross. These are shown with points in the right hand graph. The best response mappings agree (i.e., cross)
at three points. The first two Nash equilibria are in the top left and bottom right corners, where one player chooses
one strategy, the other player chooses the opposite strategy. The third Nash equilibrium is a mixed strategy which
lies along the diagonal from the bottom left to top right corners. If the players do not know which one of them is
which, then the mixed Nash is an evolutionarily stable strategy (ESS), as play is confined to the bottom left to top
right diagonal line. Otherwise an uncorrelated asymmetry is said to exist, and the corner Nash equilibria are ESSes.
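
The two corner (pure) equilibria can also be found by brute force; a minimal sketch (our encoding), applied here to
the numerical Chicken payoffs of Fig. 2:

    def pure_nash_equilibria(payoffs):
        # payoffs[(i, j)] = (row payoff, column payoff); the mixed equilibrium
        # on the diagonal is not found by this pure-strategy enumeration.
        found = []
        for i in (0, 1):
            for j in (0, 1):
                row_ok = payoffs[(i, j)][0] >= payoffs[(1 - i, j)][0]
                col_ok = payoffs[(i, j)][1] >= payoffs[(i, 1 - j)][1]
                if row_ok and col_ok:
                    found.append((i, j))
        return found

    # 0 = Swerve, 1 = Straight, payoffs from Fig. 2:
    chicken = {(0, 0): (0, 0), (0, 1): (-1, 1),
               (1, 0): (1, -1), (1, 1): (-10, -10)}
    print(pure_nash_equilibria(chicken))  # [(0, 1), (1, 0)]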

Strategy polymorphism vs strategy mixing


The ESS for the Hawk-Dove game is a mixed strategy. Formal game theory is indifferent to whether this mixture is
due to all players in a population choosing randomly between the two pure strategies (a range of possible instinctive
reactions for a single situation) or whether the population is a polymorphic mixture of players dedicated to choosing
a particular pure strategy(a single reaction differing from individual to individual). Biologically, these two options
are strikingly different ideas. The Hawk-Dove game has been used as a basis for evolutionary simulations to explore
which of these two modes of mixing ought to predominate in reality.[13]

Symmetry breaking
In both "Chicken" and "Hawk-Dove", the only symmetric Nash equilibrium is the mixed strategy Nash equilibrium,
where both individuals randomly choose between playing Hawk/Straight or Dove/Swerve. This mixed strategy
equilibrium is often sub-optimal — both players would do better if they could coordinate their actions in some way.
This observation has been made independently in two different contexts, with almost identical results.[14]

Correlated equilibrium and Chicken

            Dare    Chicken
Dare        0, 0    7, 2
Chicken     2, 7    6, 6

Fig. 6: A version of Chicken

Consider the version of "Chicken" pictured in Figure 6. Like all forms of the game, there are three Nash equilibria.
The two pure strategy Nash equilibria are (D, C) and (C, D). There is also a mixed strategy equilibrium where each
player Dares with probability 1/3. It results in expected payoffs of 14/3 ≈ 4.67 for each player.
Now consider a third party (or some natural event) that draws one of three cards labeled: (C, C), (D, C), and (C, D).
This exogenous draw event is assumed to be uniformly at random over the 3 outcomes. After drawing the card the
third party informs the players of the strategy assigned to them on the card (but not the strategy assigned to their
opponent). Suppose a player is assigned D: he would not want to deviate, supposing the other player plays their
assigned strategy, since he would then get 7 (the highest payoff possible). Suppose a player is assigned C. Then the other
player has been assigned C with probability 1/2 and D with probability 1/2 (due to the nature of the exogenous
draw). The expected utility of Daring is 0(1/2) + 7(1/2) = 3.5 and the expected utility of chickening out is 2(1/2) +
6(1/2) = 4. So, the player would prefer to chicken out.
Since neither player has an incentive to deviate from the drawn assignments, this probability distribution over the
strategies is known as a correlated equilibrium of the game. Notably, the expected payoff for this equilibrium is
7(1/3) + 2(1/3) + 6(1/3) = 5 which is higher than the expected payoff of the mixed strategy Nash equilibrium.
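
Both the incentive check and the payoff comparison above can be verified mechanically; here is a short Python sketch using the Fig. 6 payoffs (the row player's payoffs are shown; the game is symmetric).

    # Correlated equilibrium check for the Chicken game of Fig. 6.
    D, C = "dare", "chicken"
    pay = {(D, D): 0, (D, C): 7, (C, D): 2, (C, C): 6}   # row player's payoffs
    cards = [(C, C), (D, C), (C, D)]                     # drawn uniformly

    # A player told C faces (C, C) or (C, D) with probability 1/2 each:
    eu_obey = 0.5 * pay[(C, C)] + 0.5 * pay[(C, D)]      # 4.0
    eu_dare = 0.5 * pay[(D, C)] + 0.5 * pay[(D, D)]      # 3.5
    assert eu_obey >= eu_dare    # no incentive to deviate from C
    # A player told D knows the opponent was told C and gets 7, the maximum.

    expected = sum(pay[card] for card in cards) / 3
    print(expected)              # 5.0 > 14/3 from the mixed Nash equilibrium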

Uncorrelated asymmetries and solutions to the Hawk-Dove game


Although there are three Nash equilibria in the Hawk-Dove game, the one which emerges as the evolutionarily stable
strategy (ESS) depends upon the existence of any uncorrelated asymmetry in the game (in the sense of
anti-coordination games). In order for row players to choose one strategy and column players the other, the players
must be able to distinguish which role (column or row player) they have. If no such uncorrelated asymmetry exists
then both players must choose the same strategy, and the ESS will be the mixing Nash equilibrium. If there is an
uncorrelated asymmetry, then the mixing Nash is not an ESS, but the two pure, role contingent, Nash equilibria are.
The standard biological interpretation of this uncorrelated asymmetry is that one player is the territory owner, while
the other is an intruder on the territory. In most cases, the territory owner plays Hawk while the intruder plays Dove.
In this sense, the evolution of strategies in Hawk-Dove can be seen as the evolution of a sort of prototypical version
of ownership. Game-theoretically, however, there is nothing special about this solution. The opposite solution —
where the owner plays dove and the intruder plays Hawk — is equally stable. In fact, this solution is present in a
certain species of spider; when an invader appears the occupying spider leaves. In order to explain the prevalence of
property rights over "anti-property rights" one must discover a way to break this additional symmetry.[14]
Chicken 138

Replicator dynamics
Replicator dynamics is a simple model of strategy change
commonly used in evolutionary game theory. In this model, a
strategy which does better than the average increases in frequency
at the expense of strategies that do worse than the average. There
are two versions of the replicator dynamics. In one version, there
is a single population which plays against itself. In another, there
are two populations, and each population only plays
against the other population (and not against itself).

In the one population model, the only stable state is the mixed strategy Nash equilibrium. Every initial population proportion (except all Hawk and all Dove) converges to the mixed strategy Nash equilibrium, where part of the population plays Hawk and part of the population plays Dove. (This occurs because the only ESS is the mixed strategy equilibrium.) In the two population model, this mixed point becomes unstable. In fact, the only stable states in the two population model correspond to the pure strategy equilibria, where one population is composed of all Hawks and the other of all Doves. In this model one population becomes the aggressive population while the other becomes passive. This model is illustrated by the vector field pictured in Figure 7a. The one dimensional vector field of the single population model (Figure 7b) corresponds to the bottom left to top right diagonal of the two population model.

Fig. 7a: Vector field for two population replicator dynamics and Hawk-Dove

Fig. 7b: Vector field for single population replicator dynamics

The single population model presents a situation where no uncorrelated asymmetries exist, and so the best players can do is randomize their strategies. The two population model provides such an asymmetry, and the members of each population will use it to correlate their strategies. In the two population model, one population gains at the expense of the other. Hawk-Dove and Chicken thus illustrate an interesting case where the qualitative results for the two different versions of the replicator dynamics differ wildly.[15]
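
A minimal numerical sketch of the single population replicator dynamic illustrates the convergence described above. The Hawk-Dove payoffs (V = 2, C = 4) are an assumed parameterization, chosen so that the mixed ESS has a Hawk share of 1/2.

    # One-population replicator dynamics for Hawk-Dove (assumed payoffs).
    A = [[-1.0, 2.0],    # Hawk vs (Hawk, Dove): (V-C)/2 = -1, V = 2
         [0.0, 1.0]]     # Dove vs (Hawk, Dove): 0, V/2 = 1

    x, dt = 0.9, 0.01    # initial Hawk share, step size
    for _ in range(100_000):
        f_hawk = A[0][0] * x + A[0][1] * (1 - x)
        f_dove = A[1][0] * x + A[1][1] * (1 - x)
        f_avg = x * f_hawk + (1 - x) * f_dove
        x += dt * x * (f_hawk - f_avg)          # replicator equation
    print(f"Hawk share converges to {x:.3f}")   # ~0.500, the mixed ESS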

Related games

Brinkmanship
"Chicken" and "Brinkmanship" are often used synonymously in the context of conflict, but in the strict
game-theoretic sense, "brinkmanship" refers to a strategic move designed to avert the possibility of the opponent
switching to aggressive behavior. The move involves a credible threat of the risk of irrational behavior in the face of
aggression. If player 1 unilaterally moves to A (aggression), a rational player 2 cannot retaliate, since (A, C) is
preferable to (A, A), where C denotes conceding. Only if player 1 has grounds to believe that there is sufficient risk
that player 2 responds irrationally (usually because player 2 has given up control over the response, so that there is
sufficient risk that player 2 responds with A) will player 1 retract and agree on the compromise.
War of attrition
Like "Chicken", the "War of attrition" game models escalation of conflict, but they differ in the form in which the
conflict can escalate. Chicken models a situation in which the catastrophic outcome differs in kind from the
agreeable outcome, e.g., if the conflict is over life and death. War of attrition models a situation in which the
outcomes differ only in degrees, such as a boxing match in which the contestants have to decide whether the ultimate
prize of victory is worth the ongoing cost of deteriorating health and stamina.

Schedule Chicken & Project Management


The term "Schedule Chicken"[16] is used in project management and software development circles. The condition
occurs when two or more areas of a product team claim they can deliver features at an unrealistically early date
because each assumes the other teams are stretching their predictions even more than they are. This pretense repeatedly carries forward past one project checkpoint to the next, until feature integration begins or just before the functionality is actually due.
The practice of "Schedule Chicken"[17] often results in contagious schedule slips due to the inter-team dependencies, and it is difficult to identify and resolve, as it is in the best interest of each team not to be the first bearer of bad news. The psychological drivers underlying "Schedule Chicken" behavior in many ways mimic the Hawk-Dove or Snowdrift model of conflict.

Notes
[1] 'Snowdrift' game tops 'Prisoner's Dilemma' in explaining cooperation (http://www.physorg.com/news111145481.html)
[2] Osborne and Rubinstein (1994) p. 30.
[3] Russell (1959) p. 30.
[4] Dixit and Nalebuff (1991) pp. 205–222.
[5] Maynard Smith and Parker (1976).
[6] Rapoport and Chammah (1966) pp. 10–14 and 23–28.
[7] Maynard Smith and Price (1973).
[8] Maynard Smith (1982).
[9] Hammerstein (1981).
[10] Kim (1995).
[11] Cressman (1995).
[12] Kahn (1965), cited in Rapoport and Chammah (1966)
[13] Bergstrom and Godfrey-Smith (1998)
[14] Skyrms (1996) pp. 76–79.
[15] Weibull (1995) pp. 183–184.
[16] Rising, L: The Patterns Handbook: Techniques, Strategies, and Applications, page 169. Cambridge University Press, 1998.
[17] Beck, K and Fowler, M: Planning Extreme Programming, page 33. Safari Tech Books, 2000.

References
• Bergstrom, C. T. and Godfrey-Smith, P. (1998). "On the evolution of behavioral heterogeneity in individuals and
populations". Biology and Philosophy 13 (2): 205–231. doi:10.1023/A:1006588918909.
• Cressman, R. (1995). "Evolutionary Stability for Two-stage Hawk-Dove Games". Rocky Mountain Journal of
Mathematics 25: 145–155. doi:10.1216/rmjm/1181072273.
• Deutsch, M. (1974). The Resolution of Conflict: Constructive and Destructive Processes. Yale University Press,
New Haven. ISBN 978-0300016833.
• Dixit, A.K. and Nalebuff, B.J. (1991). Thinking Strategically. W.W. Norton. ISBN 0393310353.
• Fink, E.C., Gates, S., Humes, B.D. (1998). Game Theory Topics: Incomplete Information, Repeated Games, and
N-Player Games. Sage. ISBN 0761910166.
• Hammerstein, P. (1981). "The Role of Asymmetries in Animal Contests". Animal Behavior 29: 193–205.
doi:10.1016/S0003-3472(81)80166-2.
• Kahn, H. (1965). On escalation: metaphors and scenarios. Praeger Publ. Co., New York. ISBN 978-0313251634.
• Kim, Y-G. (1995). "Status signaling games in animal contests". Journal of Theoretical Biology 176 (2): 221–231.
doi:10.1006/jtbi.1995.0193. PMID 7475112.
• Osborne, M.J. and Rubinstein, A. (1994). A course in game theory. MIT press. ISBN 0-262-65040-1.
• Maynard Smith, J. (1982). Evolution and the Theory of Games. Cambridge University Press.
ISBN 978-0521288842.
• Maynard Smith, J. and Parker, G.A. (1976). "The logic of asymmetric contests". Animal Behaviour 24: 159–175.
doi:10.1016/S0003-3472(76)80110-8.
• Maynard Smith, J. and Price, G.R. (1973). "The logic of animal conflict". Nature 246 (5427): 15–18.
Bibcode 1973Natur.246...15S. doi:10.1038/246015a0.
• Moore, C.W. (1986). The Mediation Process: Practical Strategies for Resolving Conflict. Jossey-Bass, San
Francisco. ISBN 978-0875896731.
• Rapoport, A. and Chammah, A.M. (1966). "The Game of Chicken". American Behavioral Scientist 10.
• Russell, B.W. (1959). Common Sense and Nuclear Warfare. George Allen and Unwin, London.
ISBN 0041720032.
• Skyrms, Brian (1996). Evolution of the Social Contract. New York: Cambridge University Press.
ISBN 0521555833.
• Weibull, Jörgen W. (1995). Evolutionary Game Theory. Cambridge, MA: MIT Press. ISBN 0-262-23181-6.

External links
• The game of Chicken as a metaphor for human conflict (http://www.heretical.com/games/chicken.html)
• Game-theoretic analysis of Chicken (http://www.gametheory.net/Dictionary/Games/GameofChicken.html)
• Game of Chicken – Rebel Without a Cause (http://www.egwald.ca/operationsresearch/chickengame.php) by
Elmer G. Wiens.
• David M. Dikel, David Kane, James R. Wilson (2001). Software Architecture: Organizational Principles and
Patterns, University of Michigan, ISBN 9780130290328
• Michael Ficco (2001). What Every Engineer Should Know about Career Management, CRC Press, ISBN
9781420076820
• Pete McBreen (2002). Software Craftsmanship: The New Imperative, Addison-Wesley, ISBN 9780201733860
• Online model: Expected Dynamics of an Imitation Model in the Hawk-Dove Game (http://demonstrations.
wolfram.com/ExpectedDynamicsOfAnImitationModelInTheHawkDoveGame/)
• Online model: Expected Dynamics of an Intra-Population Imitation Model in the Two-Population Hawk-Dove
Game (http://demonstrations.wolfram.com/
ExpectedDynamicsOfAnIntraPopulationImitationModelInTheTwoPop/)
Centipede game
In game theory, the centipede game, first introduced by Rosenthal (1981), is an extensive form game in which two players take turns choosing either to take a slightly larger share of a slowly increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds (hence the name), any game with this structure but a different number of rounds is called a centipede game. The unique subgame perfect equilibrium (and every Nash equilibrium) of these games indicates that the first player should take the pot on the very first round of the game; however, in empirical tests relatively few players do so, and as a result achieve a higher payoff than the payoff predicted by the equilibrium analysis. These results are taken to show that subgame perfect equilibria and Nash equilibria fail to predict human play in some circumstances. The Centipede game is commonly used in introductory game theory courses and texts to highlight the concept of backward induction and the iterated elimination of dominated strategies, which show a standard way of providing a solution to the game.

Extensive form representation of a four stage "Centipede" game

Play
One possible version of a centipede game could be played as follows:
Consider two players: Alice and Bob. Alice moves first. At the start of the game, Alice has two piles of coins in front
of her: one pile contains 4 coins and the other pile contains 1 coin. Each player has two moves available: either
"take" the larger pile of coins and give the smaller pile to the other player or "push" both piles across the table to the
other player. Each time the piles of coins pass across the table, the quantity of coins in each pile doubles. For
example, assume that Alice chooses to "push" the piles on her first move, handing the piles of 1 and 4 coins over to
Bob, doubling them to 2 and 8. Bob could now use his first move to either "take" the pile of 8 coins and give 2 coins
to Alice, or he can "push" the two piles back across the table again to Alice, again increasing the size of the piles to 4
and 16 coins. The game continues for a fixed number of rounds or until a player decides to end the game by
pocketing a pile of coins.
The addition of coins is taken to be an externality, as it is not contributed by either player.
A second possible version of the centipede game is represented in the diagram above. In this version, passing the
coins across the table is represented by a move of R (going across the row of the lattice, sometimes also represented
by A for across) and pocketing the coins is a move D (down the lattice). The numbers 1 and 2 along the top of the
diagram show the alternating decision-maker between two players denoted here as 1 and 2, and the numbers at the
bottom of each branch show the payout for players 1 and 2 respectively.

Equilibrium analysis and backward induction


Standard game theoretic tools predict that the first player will defect on the first round, taking the pile of coins for
himself. In the centipede game, a pure strategy consists of a set of actions (one for each choice point in the game,
even though some of these choice points may never be reached), and a mixed strategy is a probability distribution
over the possible pure strategies. There are several pure strategy Nash equilibria of the centipede game and infinitely
many mixed strategy Nash equilibria. However, there is only one subgame perfect equilibrium (a popular refinement
to the Nash equilibrium concept).
In the unique subgame perfect equilibrium, each player chooses to defect at every opportunity. This, of course,
means defection at the first stage. In the Nash equilibria, however, the actions that would be taken after the initial
choice opportunities (even though they are never reached since the first player defects immediately) may be
cooperative.
Defection by the first player is the unique subgame perfect equilibrium and is required by any Nash equilibrium; it can
be established by backward induction. Suppose two players reach the final round of the game; the second player will
do better by defecting and taking a slightly larger share of the pot. Since we suppose the second player will defect,
the first player does better by defecting in the second to last round, taking a slightly higher payoff than she would
have received by allowing the second player to defect in the last round. But knowing this, the second player ought to
defect in the third to last round, taking a slightly higher payoff than she would have received by allowing the first
player to defect in the second to last round. This reasoning proceeds backwards through the game tree until one
concludes that the best action is for the first player to defect in the first round. The same reasoning can apply to any
node in the game tree.
In the example pictured above, this reasoning proceeds as follows. If we were to reach the last round of the game,
Player 2 would do better by choosing d instead of r. However, given that 2 will choose d, 1 should choose D in the
second to last round, receiving 3 instead of 2. Given that 1 would choose D in the second to last round, 2 should
choose d in the third to last round, receiving 2 instead of 1. But given this, Player 1 should choose D in the first
round, receiving 1 instead of 0.
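
This backward induction can be written as a short recursion. The sketch below uses the payoff pairs read off the four-stage diagram (down moves pay (1,0), (0,2), (3,1), (2,4); passing at every node pays (4,3)):

    # Backward induction in the four-stage centipede game pictured above.
    down = [(1, 0), (0, 2), (3, 1), (2, 4)]   # payoff if the mover takes (D/d)
    final = (4, 3)                            # payoff if everyone passes (R/r)
    mover = [0, 1, 0, 1]       # player 1 moves at nodes 0 and 2, player 2 at 1 and 3

    def solve(node):
        if node == len(down):
            return final
        cont = solve(node + 1)                # payoffs if the mover passes
        i = mover[node]
        return down[node] if down[node][i] >= cont[i] else cont

    print(solve(0))   # (1, 0): player 1 takes the pot immediately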
There are a large number of Nash equilibria in a centipede game, but in each, the first player defects on the first
round and the second player defects in the next round frequently enough to dissuade the first player from passing.
Being in a Nash equilibrium does not require that strategies be rational at every point in the game as in the subgame
perfect equilibrium. This means that strategies that are cooperative in the never-reached later rounds of the game
could still be in a Nash equilibrium. In the example above, one Nash equilibrium is for both players to defect on each
round (even in the later rounds that are never reached). Another Nash equilibrium is for player 1 to defect on the first
round, but pass on the third round and for player 2 to defect at any opportunity.

Empirical results
Several studies have demonstrated that the Nash equilibrium (and likewise, subgame perfect equilibrium) play is
rarely observed. Instead, subjects regularly show partial cooperation, playing "R" (or "r") for several moves before
eventually choosing "D" (or "d"). It is also rare for subjects to cooperate through the whole game. For examples see
McKelvey and Palfrey (1992) and Nagel and Tang (1998). As in many other game theoretic experiments, scholars
have investigated the effect of increasing the stakes. As with other games, for instance the ultimatum game, as the
stakes increase the play approaches (but does not reach) Nash equilibrium play.

Explanations
Since the empirical studies have produced results that are inconsistent with the traditional equilibrium analysis,
several explanations of this behavior have been offered. Rosenthal (1981) suggested that if one has reason to believe
her opponent will deviate from Nash behavior, then it may be advantageous to not defect on the first round.
One reason to suppose that people may deviate from the equilibria behavior is if some are altruistic. The basic idea is
that if you are playing against an altruist, that person will always cooperate, and hence, to maximize your payoff you
should defect on the last round rather than the first. If enough people are altruists, sacrificing the payoff of
first-round defection is worth the price in order to determine whether or not your opponent is an altruist. Nagel and
Tang (1998) suggest this explanation.
Another possibility involves error. If there is a significant possibility of error in action, perhaps because your
opponent has not reasoned completely through the backward induction, it may be advantageous (and rational) to
cooperate in the initial rounds.


However, Parco, Rapoport and Stein (2002) illustrated that the level of financial incentives can have a profound
effect on the outcome in a three-player game: the larger the incentives are for deviation, the greater propensity for
learning behavior in a repeated single-play experimental design to move toward the Nash equilibrium.
Palacios-Huerta and Volij (2009) show that expert chess players play differently than college students. With a rising
Elo, the probability of continuing the game declines; all Grandmasters in the experiment stopped at their first chance.
They conclude that chess players are familiar with using backward induction reasoning and hence need less learning
to reach the equilibrium.

Significance
Like the Prisoner's Dilemma, this game presents a conflict between self-interest and mutual benefit. If it could be
enforced, both players would prefer that they both cooperate throughout the entire game. However, a player's
self-interest or players' distrust can interfere and create a situation where both do worse than if they had blindly
cooperated. Although the Prisoner's Dilemma has received substantial attention for this fact, the Centipede Game has
received relatively less.
Additionally, Binmore (2005) has argued that some real-world situations can be described by the Centipede game.
One example he presents is the exchange of goods between parties that distrust each other. Another example
Binmore likens to the Centipede game is the mating behavior of hermaphroditic sea bass, which take turns
exchanging eggs to fertilize. In these cases, we find cooperation to be abundant.
Since the payoffs for some amount of cooperation in the Centipede game are so much larger than immediate
defection, the "rational" solutions given by backward induction can seem paradoxical. This, coupled with the fact
that experimental subjects regularly cooperate in the Centipede game has prompted debate over the usefulness of the
idealizations involved in the backward induction solutions, see Aumann (1995, 1996) and Binmore (1996).

References
• Aumann, R. (1995). "Backward Induction and Common Knowledge of Rationality". Games and Economic
Behavior 8 (1): 6–19. doi:10.1016/S0899-8256(05)80015-6.
• (1996). "A Reply to Binmore". Games and Economic Behavior 17 (1): 138–146. doi:10.1006/game.1996.0099.
• Binmore, K. (2005). Natural Justice. New York: Oxford University Press. ISBN 0195178114.
• (1996). "A Note on Backward Induction". Games and Economic Behavior 17 (1): 135–137.
doi:10.1006/game.1996.0098.
• McKelvey, R. & Palfrey, T. (1992). "An experimental study of the centipede game". Econometrica 60 (4):
803–836. doi:10.2307/2951567.
• Nagel, R. & Tang, F. F. (1998). "An Experimental Study on the Centipede Game in Normal Form: An
Investigation on Learning". Journal of Mathematical Psychology 42 (2–3): 356–384.
doi:10.1006/jmps.1998.1225.
• Palacios-Huerta, I. & Volij, O. (2009). "Field Centipedes". American Economic Review 99 (4): 1619–1635.
doi:10.1257/aer.99.4.1619.
• Parco, J. E.; Rapoport, A. & Stein, W. E. (2002). "Effects of financial incentives on the breakdown of mutual
trust". Psychological Science 13 (3): 292–297. doi:10.1111/1467-9280.00454.
• Rapoport, A.; Stein, W. E.; Parco, J. E. & Nicholas, T. E. (2003). "Equilibrium play and adaptive learning in a
three-person centipede game". Games and Economic Behavior 43 (2): 239–265.
doi:10.1016/S0899-8256(03)00009-5.
• Rosenthal, R. (1981). "Games of Perfect Information, Predatory Pricing, and the Chain Store". Journal of
Economic Theory 25 (1): 92–100. doi:10.1016/0022-0531(81)90018-1.
External links
• EconPort article on the Centipede Game [1]
• Rationality and Game Theory [2] - AMS column about the centipede game

References
[1] http://www.econport.org/econport/request?page=man_gametheory_exp_centipede
[2] http://www.ams.org/featurecolumn/archive/rationality.html

Volunteer's dilemma
The volunteer's dilemma game models a situation in which each of N players faces the decision of either making a
small sacrifice from which all will benefit, or freeriding.
One example is a scenario in which the electricity has gone out for an entire neighborhood. All inhabitants know that
the electricity company will fix the problem as long as at least one person calls to notify them, at some cost. If no
one volunteers, the worst possible outcome is obtained for all participants. If any one person elects to volunteer, the
rest benefit by not doing so.[1]
A public good is produced only if at least one person volunteers to pay its cost. In this game, bystanders decide independently whether to sacrifice themselves for the benefit of the group. Because the volunteer receives no benefit beyond what the free riders receive, yet bears the cost alone, the incentive is to free ride rather than to sacrifice oneself for the group. If no one volunteers, everyone loses. The social phenomena of the bystander effect and diffusion of responsibility are closely related to the volunteer's dilemma.

Payoff matrix
The payoff matrix for the game is shown below:

Volunteer's dilemma payoff matrix (example)

             at least one other person cooperates    all others defect
cooperate    0                                       0
defect       1                                       −10

When the volunteer's dilemma takes place between only two players, the game takes on the character of "Chicken". As seen in the payoff matrix, there is no dominant strategy in the volunteer's dilemma. In the symmetric mixed-strategy Nash equilibrium, an increase in the number of players N reduces both the likelihood that any given player volunteers and the likelihood that at least one person volunteers, mirroring the bystander effect.
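
That claim can be made concrete with the matrix above (0 for cooperating; 1 for defecting when someone else cooperates; −10 when everyone defects). In the symmetric mixed equilibrium each player's defection probability d solves the indifference condition 0 = 1·(1 − d^(N−1)) − 10·d^(N−1), i.e. d = (1/11)^(1/(N−1)); a short sketch:

    # Symmetric mixed equilibrium of the volunteer's dilemma (payoffs above).
    for N in (2, 3, 5, 10, 100):
        d = (1 / 11) ** (1 / (N - 1))    # equilibrium defection probability
        p_one = 1 - d                    # chance a given player volunteers
        p_any = 1 - d ** N               # chance at least one volunteers
        print(f"N={N:3d}  P(volunteer)={p_one:.3f}  P(anyone)={p_any:.3f}")
    # Both probabilities fall as N grows -- the bystander effect in numbers.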
Examples in real life


The story of Kitty Genovese is often used as a classic example of the volunteer's dilemma. Genovese was stabbed to
death in an alley where various residential apartments overlooked the assault. Although many people were aware of
the assault at the time (even though they may not have been aware of the exact scope and nature of the assault), few
people contacted the police.
It was assumed that people did not get involved because they thought others would contact the police and people did
not want to incur the costs of getting involved in the dispute.[2]
The meerkat exhibits the volunteer's dilemma in nature. One or more meerkats act as sentries while the others forage
for food. If a predator approaches, the sentry meerkat lets out a warning call so the others can burrow to safety.
However, the altruism of this meerkat puts it at risk of being discovered by the predator.

References
[1] Poundstone, William (1993). Prisoner's Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb. New York: Anchor Books.
ISBN 038541580X.
[2] Weesie, Jeroen (1993). "Asymmetry and Timing in the Volunteer's Dilemma". Journal of Conflict Resolution 37 (3): 569–590.
JSTOR 174269.

Dollar auction
The dollar auction is a non-zero sum sequential game designed by economist Martin Shubik to illustrate a paradox
brought about by traditional rational choice theory in which players with perfect information in the game are
compelled to make an ultimately irrational decision based completely on a sequence of rational choices made
throughout the game.[1]

Setup
The setup involves an auctioneer who volunteers to auction off a dollar bill with the following rule: the dollar goes to
the highest bidder, who pays the amount he bids. The second-highest bidder also must pay the highest amount that he
bid, but gets nothing in return. Suppose that the game begins with one of the players bidding 1 cent, hoping to make
a 99 cent profit. He will quickly be outbid by another player bidding 2 cents, as a 98 cent profit is still desirable.
Similarly, another bidder may bid 3 cents, making a 97 cent profit. Alternatively, the first bidder may attempt to
convert their loss of 1 cent into a gain of 96 cents by bidding 4 cents. In this way, a series of bids is maintained.
However, a problem becomes evident as soon as the bidding reaches 99 cents. Supposing that the other player had
bid 98 cents, they now have the choice of losing the 98 cents or bidding a dollar even, which would make their profit
zero. After that, the original player has a choice of either losing 99 cents or bidding $1.01, and only losing one cent.
After this point the two players continue to bid the value up well beyond the dollar, and neither stands to profit.
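
A toy simulation makes the escalation explicit. The decision rule below, raising by one cent whenever raising would lose less than quitting loses right now, is an assumed, myopic model of the bidders, not part of Shubik's formulation; a cap is imposed only to halt the loop, since under this rule the bidding never stops on its own.

    # Myopic escalation in the dollar auction (amounts in cents).
    prize = 100
    bids = [0, 0]
    turn = 0
    while True:
        me, rival = turn % 2, 1 - turn % 2
        nxt = bids[rival] + 1
        # Outbidding nets prize - nxt (possibly negative); quitting now
        # forfeits my standing bid, i.e. nets -bids[me].
        if nxt <= 2 * prize and prize - nxt > -bids[me]:
            bids[me] = nxt
            turn += 1
        else:
            break
    print(bids)   # bids reach the artificial cap, far beyond the prize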

References
[1] Shubik, Martin (1971). "The Dollar Auction Game: A Paradox in Noncooperative Behavior and Escalation". Journal of Conflict Resolution
15 (1): 109–111. doi:10.1177/002200277101500111.

Further reading
• Poundstone, William (1993). "The Dollar Auction". Prisoner's Dilemma: John Von Neumann, Game Theory, and
the Puzzle of the Bomb. New York: Oxford University Press. ISBN 019286162X.
Battle of the sexes

            Opera   Football                 Opera   Football
Opera       3, 2    0, 0         Opera       3, 2    1, 1
Football    0, 0    2, 3         Football    0, 0    2, 3

Battle of the Sexes (1)          Battle of the Sexes (2)

In game theory, battle of the sexes (BoS), also called Bach or Stravinsky,[1] is a two-player coordination game.
Imagine a couple that agreed to meet this evening, but cannot recall if they will be attending the opera or a football
match. The husband would most of all like to go to the football game. The wife would like to go to the opera. Both
would prefer to go to the same place rather than different ones. If they cannot communicate, where should they go?
The payoff matrix labeled "Battle of the Sexes (1)" is an example of Battle of the Sexes, where the wife chooses a
row and the husband chooses a column. In each cell, the first number represents the payoff to the wife and the
second number represents the payoff to the husband.
This representation does not account for the additional harm that might come from not only going to different
locations, but going to the wrong one as well (e.g. he goes to the opera while she goes to the football game,
satisfying neither). In order to account for this, the game is sometimes represented as in "Battle of the Sexes (2)".

Equilibrium analysis
This game has two pure strategy Nash equilibria, one where both go to the opera and another where both go to the
football game. For the first game, there is also a Nash equilibrium in mixed strategies, where the players go to their
preferred event more often than the other. For the payoffs listed above, each player attends their preferred event with
probability 3/5.
This presents an interesting case for game theory since each of the Nash equilibria is deficient in some way. The two
pure strategy Nash equilibria are unfair; one player consistently does better than the other. The mixed strategy Nash
equilibrium (when it exists) is inefficient. The players will miscoordinate with probability 13/25, leaving each player with an expected payoff of 6/5 – less than the payoff of 2 each would receive in the pure equilibrium coordinated on that player's less favored event.
One possible resolution of the difficulty involves the use of a correlated equilibrium. In its simplest form, if the
players of the game have access to a commonly observed randomizing device, then they might decide to correlate
their strategies in the game based on the outcome of the device. For example, if the couple could flip a coin before
choosing their strategies, they might agree to correlate their strategies based on the coin flip by, say, choosing
football in the event of heads and opera in the event of tails. Notice that once the results of the coin flip are revealed
neither the husband nor the wife has any incentive to alter the proposed actions – that would result in
miscoordination and a lower payoff than simply adhering to the agreed upon strategies. The result is that perfect
coordination is always achieved and, prior to the coin flip, the expected payoffs for the players are exactly equal.
Working out the above

Let us calculate the equilibrium probabilities for the actions of the two individuals (Man and Woman) in "Battle of the Sexes (1)", which depend on their expectations of the behaviour of the other and the relative payoff from each action. The Man either goes to the Football or the Opera (and not both or neither), and likewise the Woman. Write $m$ for the probability that the Man goes to the football game, so that he goes to the opera with probability $1 - m$, and write $w$ and $1 - w$ for the corresponding probabilities for the Woman.

In a mixed-strategy equilibrium, each player's probabilities must make the other player indifferent between his or her two actions. The Man is indifferent when his expected payoff from football, $3w$, equals his expected payoff from the opera, $2(1 - w)$:

$3w = 2(1 - w) \quad\Rightarrow\quad w = \tfrac{2}{5}.$

Similarly, the Woman is indifferent when

$2m = 3(1 - m) \quad\Rightarrow\quad m = \tfrac{3}{5}.$

So each player attends his or her preferred event with probability $3/5$, as stated above.

Then we can calculate the probability of coordination (that M and W do the same thing, independently) as

$P(\text{coordination}) = mw + (1 - m)(1 - w) = \tfrac{6}{25} + \tfrac{6}{25} = \tfrac{12}{25},$

and the probability of miscoordination (that M and W do different things, independently) as

$P(\text{miscoordination}) = m(1 - w) + (1 - m)w = \tfrac{9}{25} + \tfrac{4}{25} = \tfrac{13}{25}.$

These two probabilities sum to one, as they must, and the probability of miscoordination is as stated above.

The expected payoff $E$ for each individual ($E_M$ and $E_W$) is the probability of each event multiplied by the payoff if it happens. For the Man, coordination on football occurs with probability $mw$ and pays him $3$, while coordination on the opera occurs with probability $(1 - m)(1 - w)$ and pays him $2$:

$E_M = 3mw + 2(1 - m)(1 - w) = \tfrac{18}{25} + \tfrac{12}{25} = \tfrac{6}{5},$

and by the symmetry of the payoff table $E_W = 6/5$ as well, matching the expected payoff stated above.

For comparison, let us assume that the Man always goes to football. The Woman, knowing this, does best by going to football too, giving payoffs of $2$ to her and $3$ to him; this is symmetric with the case where the Woman always goes to the opera and the Man follows, due to the symmetry in the payoff table. Either pure equilibrium therefore gives both players more than the $6/5$ they expect in the mixed equilibrium. But if both players simply always attend their own preferred event (both have simple strategies), they never meet, and the payoff is just $1$ for both in "Battle of the Sexes (2)" (or $0$ in version 1), from the tables above.
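
A quick numerical check of the working above, using exact fractions:

    # Verify the mixed equilibrium of "Battle of the Sexes (1)".
    from fractions import Fraction as F

    p = F(3, 5)    # wife plays Opera (her preferred event)
    q = F(2, 5)    # husband plays Opera (so Football with probability 3/5)

    e_wife = 3 * p * q + 2 * (1 - p) * (1 - q)
    e_husband = 2 * p * q + 3 * (1 - p) * (1 - q)
    miscoordination = p * (1 - q) + (1 - p) * q
    print(e_wife, e_husband, miscoordination)   # 6/5 6/5 13/25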
Burning money

            Opera   Football                 Opera    Football
Opera       4, 1    0, 0         Opera       2, 1     −2, 0
Football    0, 0    1, 4         Football    −2, 0    −1, 4

Unburned                         Burned

Interesting strategic changes can take place in this game if one allows one player the option of "burning money" –
that is, allowing that player to destroy some of her utility. Consider the version of Battle of the Sexes pictured here
(called Unburned). Before making the decision, the row player can, in full view of the column player, choose to set fire to
2 points, making the game Burned pictured to the right. This results in a game with four strategies for each player.
The row player can choose to burn or not burn the money and also choose to play Opera or Football. The column
player observes whether or not the row player burns and then chooses either to play Opera or Football.
If one iteratively deletes weakly dominated strategies then one arrives at a unique solution where the row player does
not burn the money and plays Opera and where the column player plays Opera. The odd thing about this result is
that by simply having the opportunity to burn money (but not actually using it), the row player is able to secure her
favored equilibrium. The reasoning that results in this conclusion is known as forward induction and is somewhat
controversial. For a detailed explanation, see [2], p. 8, Section 4.5.
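
The iterated deletion can be carried out mechanically. The sketch below encodes the Unburned and Burned payoffs from the tables above; the strategy encoding is an implementation choice, and the deletion order here (rows first, then columns, repeated) is one of several possible orders for iterated deletion of weakly dominated strategies.

    # Iterated deletion of weakly dominated strategies in the burning game.
    from itertools import product

    ROWS = [("N", "O"), ("N", "F"), ("B", "O"), ("B", "F")]  # (burn?, venue)
    COLS = list(product("OF", repeat=2))   # (venue if no burn, venue if burn)

    def payoffs(r, c):
        burn, rv = r
        cv = c[0] if burn == "N" else c[1]
        base = {("O", "O"): (4, 1), ("F", "F"): (1, 4)}.get((rv, cv), (0, 0))
        return base[0] - (2 if burn == "B" else 0), base[1]

    def prune(strats, opp, pay):
        """Drop strategies weakly dominated by another strategy."""
        dead = set()
        for s in strats:
            for t in strats:
                if t == s:
                    continue
                d = [pay(t, o) - pay(s, o) for o in opp]
                if all(x >= 0 for x in d) and any(x > 0 for x in d):
                    dead.add(s)
                    break
        return [s for s in strats if s not in dead]

    rows, cols = ROWS, COLS
    while True:
        new_rows = prune(rows, cols, lambda r, c: payoffs(r, c)[0])
        new_cols = prune(cols, new_rows, lambda c, r: payoffs(r, c)[1])
        if (new_rows, new_cols) == (rows, cols):
            break
        rows, cols = new_rows, new_cols
    print(rows, cols)   # [('N', 'O')] [('O', 'O')]: no burning, both Opera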

References
• Luce, R.D. and Raiffa, H. (1957) Games and Decisions: An Introduction and Critical Survey, Wiley & Sons. (see
Chapter 5, section 3).
• Fudenberg, D. and Tirole, J. (1991) Game theory, MIT Press. (see Chapter 1, section 2.4)
[1] Osborne, Rubinstein (1994). A course in game theory. The MIT Press.
[2] http://www.umass.edu/preferen/Game%20Theory%20for%20the%20Behavioral%20Sciences/BOR%20Public/BOR%20Rationalizability.pdf

External links
• GameTheory.net (http://www.gametheory.net/dictionary/BattleoftheSexes.html)
• Cooperative Solution with Nash Function (http://www.egwald.ca/operationsresearch/cooperative.php) by
Elmer G. Wiens
Stag hunt
In game theory, the stag hunt is a game which describes a conflict between safety and social cooperation. Other
names for it or its variants include "assurance game", "coordination game", and "trust dilemma". Jean-Jacques
Rousseau described a situation in which two individuals go out on a hunt. Each can individually choose to hunt a
stag or hunt a hare. Each player must choose an action without knowing the choice of the other. If an individual
hunts a stag, he must have the cooperation of his partner in order to succeed. An individual can get a hare by himself,
but a hare is worth less than a stag. This is taken to be an important analogy for social cooperation.
The stag hunt differs from the Prisoner's Dilemma in that there are two Nash equilibria: when both players cooperate
and both players defect. In the Prisoner's Dilemma, in contrast, despite the fact that both players cooperating is
Pareto efficient, the only Nash equilibrium is when both players choose to defect.
An example of the payoff matrix for the stag hunt is pictured in Figure 2.

        Stag    Hare                    Stag    Hare
Stag    A, a    C, b            Stag    2, 2    0, 1
Hare    B, c    D, d            Hare    1, 0    1, 1

Fig. 1: Generic stag hunt       Fig. 2: Stag hunt example

Formal definition
Formally, a stag hunt is a game with two pure strategy Nash equilibria - one that is risk dominant and another that is
payoff dominant. The payoff matrix in Figure 1 illustrates a stag hunt, where $a > b \ge d > c$. Often, games with
a similar structure but without a risk dominant Nash equilibrium are called stag hunts. For instance if a=2, b=1, c=0,
and d=1. While (Hare, Hare) remains a Nash equilibrium, it is no longer risk dominant. Nonetheless many would call
this game a stag hunt.
In addition to the pure strategy Nash equilibria there is one mixed strategy Nash
equilibrium. This equilibrium depends on the payoffs, but the risk dominance condition
places a bound on the mixed strategy Nash equilibrium. No payoffs (that satisfy the
above conditions including risk dominance) can generate a mixed strategy equilibrium
where Stag is played with a probability higher than one half. The best response
correspondences are pictured here.
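
The mixed equilibrium is easy to compute from the Fig. 1 labels, assuming a symmetric game (the row player earns a or c from Stag and b or d from Hare, depending on whether the opponent hunts Stag):

    # Probability p with which Stag is hunted in the mixed equilibrium,
    # from the indifference condition a*p + c*(1-p) = b*p + d*(1-p).
    def stag_probability(a, b, c, d):
        return (d - c) / ((a - b) + (d - c))

    # Fig. 2 payoffs give 0.5, consistent with the bound stated above:
    # Stag is never played with probability greater than one half.
    print(stag_probability(2, 1, 0, 1))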
The stag hunt and social cooperation


Although most authors focus on the prisoner's dilemma
as the game that best represents the problem of social
cooperation, some authors believe that the stag hunt
represents an equally (or more) interesting context in
which to study cooperation and its problems (for an
overview see Skyrms 2004).

There is a substantial relationship between the stag hunt and the prisoner's dilemma. In biology many circumstances that have been described as prisoner's dilemma might also be interpreted as a stag hunt, depending on how fitness is calculated.

"Nature and Appearance of Deer", taken from the "Livre du Roy Modus", created in the 14th century

            Cooperate   Defect
Cooperate   2, 2        0, 3
Defect      3, 0        1, 1

Fig. 3: Prisoner's dilemma example

It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. For
example, suppose we have a prisoner's dilemma as pictured in Figure 3. The payoff matrix would need adjusting if
players who defect against cooperators might be punished for their defection. For instance, if the expected
punishment is -2, then the imposition of this punishment turns the above prisoner's dilemma into the stag hunt given
at the introduction.

Examples of the stag hunt


The original stag hunt dilemma is as follows: a group of hunters have tracked a large stag, and found it to follow a
certain path. If all the hunters work together, they can kill the stag and all eat. If they are discovered, or do not
cooperate, the stag will flee, and all will go hungry.
The hunters hide and wait along a path. An hour goes by, with no sign of the stag. Two, three, four hours pass, with
no trace. A day passes. The stag may not pass every day, but the hunters are reasonably certain that it will come.
However, a hare is seen by all hunters moving along the path.
If a hunter leaps out and kills the hare, he will eat. However, it results in the trap laid for the stag to be wasted, and
the others will starve. There is no certainty that the stag will arrive; the hare is present. The dilemma is that if one
hunter waits, he risks one of his fellows killing the hare for himself, sacrificing everyone else. This makes the risk
twofold; risk the stag never coming, or risk another man taking the kill.
In addition to the example suggested by Rousseau, David Hume provides a series of examples that are stag hunts.
One example addresses two individuals who must row a boat. If both choose to row they can successfully move the
boat. However if one doesn't, the other wastes his effort. Hume's second example involves two neighbors wishing to
drain a meadow. If they both work to drain it they will be successful, but if either fails to do his part the meadow will
not be drained.
Several animal behaviors have been described as stag hunts. One is the coordination of slime molds. In times of
stress, individual unicellular protists will aggregate to form one large body. Here if they all act together they can
successfully reproduce, but success depends on the cooperation of many individual protozoa. Another example is the
hunting practices of orcas (known as carousel feeding). Orcas cooperatively corral large schools of fish to the surface
and stun them by hitting them with their tails. Since this requires that the fish have no way to escape, it requires the
cooperation of many orcas.

References
• Skyrms, Brian. (2004) The Stag Hunt and Evolution of Social Structure. Cambridge: Cambridge University Press.

External links
• The stag hunt at GameTheory.net [1]
• The stag hunt (pdf) [2] by Brian Skyrms

References
[1] http://www.gametheory.net/Dictionary/Games/StagHunt.html
[2] http://www.lps.uci.edu/home/fac-staff/faculty/skyrms/StagHunt.pdf

Matching pennies
        Heads     Tails
Heads   +1, −1    −1, +1
Tails   −1, +1    +1, −1

Matching pennies

Matching pennies is the name for a simple example game used in game theory. It is the two strategy equivalent of
Rock, Paper, Scissors. Matching pennies is used primarily to illustrate the concept of mixed strategies and a mixed
strategy Nash equilibrium.
The game is played between two players, Player A and Player B. Each player has a penny and must secretly turn the
penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match (both heads or
both tails) Player A keeps both pennies, so wins one from Player B (+1 for A, -1 for B). If the pennies do not match
(one heads and one tails) Player B keeps both pennies, so receives one from Player A (-1 for A, +1 for B). This is an
example of a zero-sum game, where one player's gain is exactly equal to the other player's loss.
The game can be written in a payoff matrix (pictured right). Each cell of the matrix shows the two players' payoffs,
with Player A's payoffs listed first.
This game has no pure strategy Nash equilibrium since there is no pure strategy (heads or tails) that is a best
response to a best response. In other words, there is no pair of pure strategies such that neither player would want to
switch if told what the other would do. Instead, the unique Nash equilibrium of this game is in mixed strategies: each
player chooses heads or tails with equal probability.[1] In this way, each player makes the other indifferent between
choosing heads or tails, so neither player has an incentive to try another strategy. The best response functions for
mixed strategies are depicted in Figure 1 below:
Figure 1. Best response correspondences for players in the matching pennies game. The leftmost mapping is for the coordinating player, the middle shows the mapping for the discoordinating player, and the sole Nash equilibrium is shown in the right hand graph. x is the probability that the discoordinating player plays heads; y is the probability that the coordinating player plays heads. The unique intersection is the only point where the mixed strategy of the first player is the best response to the strategy of the second, and vice versa.

The matching pennies game is mathematically equivalent to the games "Morra" or "odds and evens", where two
players simultaneously display one or two fingers, with the winner determined by whether or not the number of
fingers match. Again, the only strategy for these games to avoid being exploited is to play the equilibrium.
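
The cost of deviating from the equilibrium mix is easy to quantify. In the sketch below, the matcher is Player A (who wins on a match); against an opponent who plays heads with probability q, the matcher's best response earns |2q − 1| per round, which is zero only at q = 1/2:

    # Exploitability of a biased opponent in matching pennies.
    def matcher_edge(q):
        eu_heads = q * 1 + (1 - q) * (-1)   # matcher plays Heads
        eu_tails = q * (-1) + (1 - q) * 1   # matcher plays Tails
        return max(eu_heads, eu_tails)      # best response value, |2q - 1|

    for q in (0.5, 0.6, 0.9):
        print(q, matcher_edge(q))           # 0.0, then 0.2, then 0.8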
Of course, human players might not faithfully apply the equilibrium strategy, especially if matching pennies is
played repeatedly. In a repeated game, if one is sufficiently adept at psychology, it may be possible to predict the
opponent's move and choose accordingly, in the same manner as expert Rock, Paper, Scissors players. In this way, a
positive expected payoff might be attainable, whereas against an opponent who plays the equilibrium, one's expected
payoff is zero.
Nonetheless, statistical analysis of penalty kicks in soccer—a high-stakes real-world situation that closely resembles
the matching pennies game—has shown that the decisions of kickers and goalies resemble a mixed strategy
equilibrium.[2] [3]

References
[1] GameTheory.net (http://www.gametheory.net/dictionary/Games/Matchingpennies.html)
[2] Chiappori, P.; Levitt, S.; Groseclose, T. (2002). "Testing Mixed-Strategy Equilibria When Players Are Heterogeneous: The Case of Penalty Kicks in Soccer" (http://pricetheory.uchicago.edu/levitt/Papers/ChiapporiGrosecloseLevitt2002.pdf). American Economic Review 92 (4): 1138–1151. JSTOR 3083302.
[3] Palacios-Huerta, I. (2003). "Professionals Play Minimax". Review of Economic Studies 70 (2): 395–415. doi:10.1111/1467-937X.00249.
Ultimatum game
The ultimatum game is a game often played in economic experiments in which two players interact to decide how to divide a sum of money that is given to them. The first player proposes how to divide the sum between the two players, and the second player can either accept or reject this proposal. If the second player rejects, neither player receives anything. If the second player accepts, the money is split according to the proposal. The game is played only once, so that reciprocation is not an issue.

Extensive form representation of a two proposal ultimatum game. Player 1 can offer a fair (F) or unfair (U) proposal; player 2 can accept (A) or reject (R).

Equilibrium analysis
For illustration, we will suppose there is a smallest division of the good available (say 1 cent). Suppose that the total
amount of money available is x.
The first player chooses some amount p in the interval [0,x]. The second player chooses some function f: [0, x] →
{"accept", "reject"} (i.e. the second chooses which divisions to accept and which to reject). We will represent the
strategy profile as (p, f), where p is the proposal and f is the function. If f(p) = "accept" the first receives p and the
second x−p, otherwise both get zero. (p, f) is a Nash equilibrium of the ultimatum game if f(p) = "accept" and there is
no y > p such that f(y) = "accept" (i.e. player 2 would reject all proposals in which player 1 receives more than p).
The first player would not want to unilaterally increase his demand since the second will reject any higher demand.
The second would not want to reject the demand, since he would then get nothing.
There is one other Nash equilibrium where p = x and f(y) = "reject" for all y>0 (i.e. the second rejects all demands
that gives the first any amount at all). Here both players get nothing, but neither could get more by unilaterally
changing his / her strategy.
However, only one of these Nash equilibria satisfies a more restrictive equilibrium concept, subgame perfection.
Suppose that the first demands a large amount that gives the second some (small) amount of money. By rejecting the
demand, the second is choosing nothing rather than something. So, it would be better for the second to choose to
accept any demand that gives her any amount whatsoever. If the first knows this, he will give the second the smallest
(non-zero) amount possible.[1]
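
A minimal sketch of this subgame-perfect logic, taking the smallest division to be 1 cent and an assumed total of x = 100 cents:

    # Subgame perfect play in the discrete ultimatum game.
    x = 100   # total amount, in cents

    def responder_accepts(share):
        # Accepting any positive share beats the 0 from rejecting.
        return share > 0

    # The proposer keeps the most that the responder still accepts.
    keep = max(p for p in range(x + 1) if responder_accepts(x - p))
    print(keep, x - keep)   # 99 1: the responder gets the smallest positive amount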

Experimental results
In many cultures, people offer "fair" (i.e., 50:50) splits, and offers of less than 20% are often rejected.[2] One limited
study on monozygotic and dizygotic twins claims that genetic variation can affect reactions to unfair offers, though
the study failed to employ actual controls for environmental differences.[3]

Explanations
The highly mixed results (along with similar results in the Dictator game) have been taken to be both evidence for
and against the so-called "Homo economicus" assumptions of rational, utility-maximizing, individual decisions.
Since an individual who rejects a positive offer is choosing to get nothing rather than something, that individual must
not be acting solely to maximize his economic gain, unless one incorporates economic applications of social,
psychological, and methodological factors (such as the observer effect). Several attempts to explain this behavior are
available. Some authors suggest that individuals are maximizing their expected utility, but money does not translate
directly into expected utility.[4] Perhaps individuals get some psychological benefit from engaging in punishment or
receive some psychological harm from accepting a low offer. It could also be the case that the second player, by
having the power to reject the offer, uses such power as leverage against the first player, thus motivating him to be
fair.
The classical explanation of the ultimatum game as a well-formed experiment approximating general behaviour often leads to the conclusion that the assumption of rational behavior is accurate to a degree, but must encompass additional vectors of decision making. However, several competing models suggest ways to bring the cultural
preferences of the players within the optimized utility function of the players in such a way as to preserve the utility
maximizing agent as a feature of microeconomics. For example, researchers have found that Mongolian proposers
tend to offer even splits despite knowing that very unequal splits are almost always accepted. Similar results from
other small-scale societies players have led some researchers to conclude that "reputation" is seen as more important
than any economic reward.[5] Others have proposed the social status of the responder may be part of the payoff.[6]
Another way of integrating the conclusion with utility maximization is some form of inequity aversion model
(preference for fairness). Even in anonymous one-shot settings, the outcome suggested by economic theory – minimal money transfer and acceptance – is rejected by over 80% of the players.
An explanation which was originally quite popular was the "learning" model, in which it was hypothesized that
proposers' offers would decay towards the subgame perfect NE (almost zero) as they mastered the strategy of the
game. (This decay tends to be seen in other iterated games). However, this explanation (bounded rationality) is less
commonly offered now, in light of empirical evidence against it.[7]
It has been hypothesised (e.g. by James Surowiecki) that very unequal allocations are rejected only because the
absolute amount of the offer is low. The concept here is that if the amount to be split were ten million dollars a 90:10
split would probably be accepted rather than spurning a million dollar offer. Essentially, this explanation says that
the absolute amount of the endowment is not significant enough to produce strategically optimal behaviour.
However, many experiments have been performed where the amount offered was substantial: studies by Cameron
and Hoffman et al. have found that the higher the stakes are the closer offers approach an even split, even in a 100
USD game played in Indonesia, where average 1995 per-capita income was 670 USD. Rejections are reportedly
independent of the stakes at this level, with 30 USD offers being turned down in Indonesia, as in the United States,
even though this equates to two weeks' wages in Indonesia.[8]

Neurologic explanations
Generous offers in the ultimatum game (offers exceeding the minimum acceptable offer) are commonly made. Zak,
Stanton & Ahmadi (2007)[9] showed that two factors can explain generous offers: empathy and perspective taking.
They varied empathy by infusing participants with intranasal oxytocin or placebo (blinded). They affected
perspective-taking by asking participants to make choices as both player 1 and player 2 in the ultimatum game, with
later random assignment to one of these. Oxytocin increased generous offers by 80% relative to placebo. Oxytocin
did not affect the minimum acceptance threshold or offers in the dictator game (meant to measure altruism). This
indicates that emotions drive generosity.
Rejections in the ultimatum game have been shown to be caused by adverse physiologic reactions to stingy
offers.[10] In a brain imaging experiment by Sanfey et al., stingy offers (relative to fair and hyperfair offers)
differentially activated several brain areas, especially the anterior insular cortex, a region associated with visceral
disgust. If Player 1 in the ultimatum game anticipates this response to a stingy offer, they may be more generous.
An increase in rational decisions in the game has been found among experienced Buddhist meditators. fMRI data
show that meditators recruit the posterior insular cortex (associated with interoception) during unfair offers and show
reduced activity in the anterior insular cortex compared to controls.[11]
People whose serotonin levels have been artificially lowered will reject unfair offers more often than players with
normal serotonin levels.[12]
This is true whether the players are on placebo or are infused with a hormone that makes them more generous in the
ultimatum game.[13] [14]
People who have ventromedial frontal cortex lesions were found to be more likely to reject unfair offers.[15] This was
suggested to be due to the abstractness and delay of the reward, rather than an increased emotional response to the
unfairness of the offer.[16]

Evolutionary game theory


Other authors have used evolutionary game theory to explain behavior in the ultimatum game.[17] Simple
evolutionary models, e.g. the replicator dynamics, cannot account for the evolution of fair proposals or for rejections.
These authors have attempted to provide increasingly complex models to explain fair behavior.

Sociological applications
The ultimatum game is important from a sociological perspective, because it illustrates the human unwillingness to
accept injustice. The tendency to refuse small offers may also be seen as relevant to the concept of honour.
The extent to which people are willing to tolerate different distributions of the reward from "cooperative" ventures
results in inequality that is, measurably, exponential across the strata of management within large corporations. See
also: Inequity aversion within companies.
Some see the implications of the ultimatum game as profoundly relevant to the relationship between society and the
free market, with Prof. P.J. Hill, (Wheaton College (Illinois)) saying:
I see the [ultimatum] game as simply providing counter evidence to the general presumption that participation
in a market economy (capitalism) makes a person more selfish.[18]

History
The first ultimatum game was developed in 1982 as a stylized representation of negotiation, by Güth, Schmittberger,
and Schwarze.[19] It has since become a popular economic experiment, and was said to be "quickly catching up with
the Prisoner's Dilemma as a prime showpiece of apparently irrational behavior" in a paper by Martin Nowak, Karen
M. Page, and Karl Sigmund.[20]

Variants
In the "competitive ultimatum game" there are many proposers and the responder can accept at most one of their
offers: With more than three (naïve) proposers the responder is usually offered almost the entire endowment[21]
(which would be the Nash Equilibrium assuming no collusion among proposers).
In the "ultimatum game with tipping", a tip is allowed from responder back to proposer, a feature of the trust game,
and net splits tend to be more equitable.[22]
The "reverse ultimatum game" gives more power to the responder by giving the proposer the right to offer as many
divisions of the endowment as they like. Now the game only ends when the responder accepts an offer or abandons
the game, and therefore the proposer tends to receive slightly less than half of the initial endowment.[23]
Robert Aumann's Blackmailer Paradox appears to be a repeated game in which the ultimatum game is played many
times by the same players for high stakes.
The pirate game illustrates a variant with more than two participants with voting power, as illustrated in Ian Stewart's
"A Puzzle for Pirates".[24]
Notes
[1] Technically, making a zero offer to the responder, and accepting this offer is also a Nash Equilibrium, as the responder's threat to reject the
offer is no longer credible since they now gain nothing (materially) by refusing the zero amount offered. Normally, when a player is
indifferent between various strategies the principle in Game Theory is that the strategy with an outcome which is Pareto optimally better for
the other players is chosen (as a sort of tie-breaker to create a unique NE). However, it is generally assumed that this principle should not
apply to an ultimatum game player offered nothing; she is instead assumed to reject the offer although accepting it would be an equally
subgame perfect NE. For instance, the University of Wisconsin summary: Testing Subgame Perfection Apart From Fairness in Ultimatum
Games (http:/ / econ. ucsd. edu/ ~jandreon/ Publications/ ExEc 2006. pdf) from 2002 admits the possibility that the proposer may offer
nothing but qualifies the subgame perfect NE with the words (almost nothing) throughout the Introduction.
[2] See Joseph Henrich et al. (2004) and Oosterbeek et al. (2004).
[3] http:/ / www. pnas. org/ content/ 105/ 10/ 3721. full. pdf+ html
[4] See Bolton (1991), and Ochs and Roth, A. E. (1989).
[5] Mongolian/Kazakh study conclusion (http:/ / www. psych. upenn. edu/ ~fjgil/ Ultimatum. pdf) from University of Pennsylvania.
[6] Social Role in the Ultimate Game (http:/ / radoff. com/ blog/ 2010/ 05/ 18/ social-role-ultimatum-game/ )
[7] A forthcoming paper "On the Behavior of Proposers in Ultimatum Games" Journal of economic behaviour and organization (http:/ / www.
elsevier. com/ wps/ find/ journaldescription. cws_home/ 505559/ description) has the thesis that learning will not cause NE-convergence: see
the abstract (http:/ / www. qmw. ac. uk/ ~ugte173/ abs/ abs. jebo2. html).
[8] See "Do higher stakes lead to more equilibrium play?" (page 18) in 3. Bargaining experiments (http:/ / www. iza. org/ teaching/
falk_WS2003/ falk_l3_bargaining. pdf), Professor Armin Falk's summary at the Institute for the Study of Labor (http:/ / www. iza. org/
index_html?mainframe=http:/ / www. iza. org/ teaching/ falk_WS2003).
[9] Zak PJ, Stanton AA, Ahmadi S (2007), Oxytocin Increases Generosity in Humans. PloSONE 2(11):e1128. (http:/ / www. plosone. org/
article/ info:doi/ 10. 1371/ journal. pone. 0001128;jsessionid=C12706C1A789233D0F59DDFE31C5FD25)
[10] Sanfey, et al. (2002)
[11] Kirk et al. (2011). "Interoception Drives Increased Rational Decision-Making in Meditators Playing the Ultimatum Game". Frontiers in
Neuroscience 5:49. PMC 3082218. PMID 21559066.
[12] Crockett, Molly J.; Luke Clark, Golnaz Tabibnia, Matthew D. Lieberman, Trevor W. Robbins (2008-06-05). "Serotonin Modulates
Behavioral Reactions to Unfairness" (http:/ / www. scn. ucla. edu/ pdf/ Crockett (2008). pdf) (– Scholar search (http:/ / scholar. google. co. uk/
scholar?hl=en& lr=& q=author:Crockett+ intitle:Serotonin+ Modulates+ Behavioral+ Reactions+ to+ Unfairness& as_publication=Science&
as_ylo=2008& as_yhi=2008& btnG=Search)). Science 320 (5884): 1155577. doi:10.1126/science.1155577. PMC 2504725. PMID 18535210.
. Retrieved 2008-06-22.
[13] Neural Substrates of Decision-Making in Economic Games Scientific Journals International (http:/ / www. scientificjournals. org/
journals2007/ articles/ 1176. pdf)
[14] Oxytocin Increases Generosity in Humans PloSONE 2(11):e1128 (http:/ / www. scientificjournals. org/ journals2007/ j_of_dissertation.
htm)
[15] Koenigs, Michael; Daniel Tranel (January 2007). "Irrational Economic Decision-Making after Ventromedial Prefrontal Damage: Evidence
from the Ultimatum Game". Journal of Neuroscience 27 (4): 951–956. doi:10.1523/JNEUROSCI.4606-06.2007. PMC 2490711.
PMID 17251437.
[16] Moretti, Laura; Davide Dragone, Giuseppe di Pellegrino (2009). "Reward and Social Valuation Deficits following Ventromedial Prefrontal
Damage". Journal of Cognitive Neuroscience 21 (1): 128–140. doi:10.1162/jocn.2009.21011. PMID 18476758.
[17] See, for example, Gale et al. (1995), Güth and Yaari (1992), Huck and Oechssler (1999), Nowak & Sigmund (2000) and Skyrms (1996)
[18] See The Ultimatum game detailed description (http:/ / www. fte. org/ capitalism/ activities/ ultimatum/ ) as a class room plan from
EconomicsTeaching.org. (This is a more thorough explanation of the practicalities of the game than is possible here.)
[19] Güth et al. (1982), page 367: the description of the game at Neuroeconomics (http:/ / neuroeconomics. typepad. com/ neuroeconomics/
2003/ 09/ what_is_the_ult. html) cites this as the earliest example.
[20] Nowak, M. A.; Page, K. M.; Sigmund, K. (2000). "Fairness Versus Reason in the Ultimatum Game". Science 289 (5485): 1773–1775.
doi:10.1126/science.289.5485.1773. PMID 10976075.
[21] Ultimatum game with proposer competition (http:/ / homepage. univie. ac. at/ christoph. hauert/ gamelab/ ultiproposer. html) by the
GameLab (http:/ / homepage. univie. ac. at/ christoph. hauert/ gamelab/ ).
[22] Ruffle (1998), p. 247.
[23] The reverse ultimatum game and the effect of deadlines is from Gneezy, Haruvy, & Roth, A. E. (2003).
[24] Stewart, Ian (May 1999). "A Puzzle for Pirates" (http:/ / euclid. trentu. ca/ math/ bz/ pirates_gold. pdf). Scientific American 05: 98–99. .
Retrieved 3/11/2011.

References
• Alvard, M. (2004). "The Ultimatum Game, Fairness, and Cooperation among Big Game Hunters" (http://
anthropology.tamu.edu/faculty/alvard/downloads/ultimatum.pdf). In Henrich, J., Boyd, R., Bowles, S.,
Gintis, H., Fehr, E., and Camerer, C. Foundations of Human Sociality: Ethnography and Experiments in 15
small-scale societies. Oxford University Press. pp. 413–435.
• Bearden, J. Neil (2001). "Ultimatum Bargaining Experiments: The State of the Art" (http://papers.ssrn.com/
sol3/papers.cfm?abstract_id=626183).
• Bicchieri, Cristina and Jiji Zhang (2008). "An Embarrassment of Riches: Modeling Social Preferences in
Ultimatum games", in U. Maki (ed) Handbook of the Philosophy of Economics, Elsevier
• Bolton, G.E. (1991). "A comparative Model of Bargaining: Theory and Evidence". American Economic Review
81: 1096–1136.
• Gale, J., Binmore, K.G., and Samuelson, L. (1995). "Learning to be Imperfect: The Ultimatum Game". Games
and Economic Behavior 8: 56–90. doi:10.1016/S0899-8256(05)80017-X.
• Gneezy, Haruvy, and Roth, A. E. (2003). "Bargaining under a deadline: evidence from the reverse ultimatum
game" (http://gsbwww.uchicago.edu/fac/uri.gneezy/vita/deadline.pdf) (PDF – Scholar search (http://scholar.
google.co.uk/scholar?hl=en&lr=&q=intitle:Bargaining+under+a+deadline:+evidence+from+the+reverse+
ultimatum+game&as_publication=Games+and+Economic+Behavior&as_ylo=2003&as_yhi=2003&
btnG=Search)). Games and Economic Behavior 45 (2): 347. doi:10.1016/S0899-8256(03)00151-9.
• Güth, W., Schmittberger, and Schwarze (1982). "An Experimental Analysis of Ultimatum Bargaining". Journal of
Economic Behavior and Organization 3 (4): 367–388. doi:10.1016/0167-2681(82)90011-7.
• Güth, W. and Yaari, M. (1992). "An Evolutionary Approach to Explain Reciprocal Behavior in a Simple Strategic
Game". In U. Witt. Explaining Process and Change – Approaches to Evolutionary Economics. Ann Arbor.
pp. 23–34.
• Henrich, Joseph, Robert Boyd, Samuel Bowles, Colin Camerer, Ernst Fehr, and Herbert Gintis (2004).
Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale
Societies. Oxford University Press.
• Huck, S. and Oechssler, J. (1999). "The Indirect Evolutionary Approach to Explaining Fair Allocations". Games
and Economic Behavior 28: 13–24. doi:10.1006/game.1998.0691.
• Ochs, J. and Roth, A. E. (1989). "An Experimental Study of Sequential Bargaining". American Economic Review
79: 355–384.
• Oosterbeek, Hessel, Randolph Sloof, and Gijs van de Kuilen (2004). "Cultural Differences in Ultimatum Game
Experiments: Evidence from a Meta-Analysis". Experimental Economics 7 (2): 171–188.
doi:10.1023/B:EXEC.0000026978.14316.74.
• Ruffle, B.J. (1998). "More is Better, but Fair is Fair: Tipping in Dictator and Ultimatum Games". Games and
Economic Behavior 23 (2): 247. doi:10.1006/game.1997.0630.
• Sanfey et al.; Rilling, JK; Aronson, JA; Nystrom, LE; Cohen, JD (2002). "The neural basis of economic
decision-making in the ultimatum game". Science 300 (5626): 1755–1758. doi:10.1126/science.1082976.
PMID 12805551.
• Skyrms, B. (1996). Evolution of the Social Contract. Cambridge University Press.
• Zak, P.J., Stanton, A.A., Ahmadi, S. (2007). Brosnan, Sarah. ed. "Oxytocin Increases Generosity in Humans"
(http://www.neuroeconomicstudies.org/pdf/ZakGenerosity.pdf) (PDF). Public Library of Science ONE 2
(11): e1128. doi:10.1371/journal.pone.0001128. PMC 2040517. PMID 17987115.
• Angela A. Stanton (2007). "Neural Substrates of Decision-Making in Economic Games" (http://www.
scientificjournals.org/journals2007/articles/1176.pdf) (PDF). Scientific Journals International 1 (1): 1–64.

Further reading
• Stanton, Angela (2006). Evolving Economics: Synthesis (http://mpra.ub.uni-muenchen.de/2369/).

External links
• Video lecture on the ultimatum game (http://www.youtube.com/watch?v=xpkxLKV_3d0)
• Game-tree based analysis of the ultimatum game (http://www.altruists.org/348)

Rock-paper-scissors
Rock-paper-scissors chart
Years active: Chinese Han Dynasty to present
Genre(s): Hand game
Players: 2
Setup time: None
Playing time: Instant
Random chance: High
Skill(s) required: Luck, psychology

Rock-paper-scissors is a hand game played by two people. The game is also known as roshambo, or another
ordering of the three items (with "stone" sometimes substituting for "rock").[1] [2]
The game is often used as a choosing method in a way similar to coin flipping, drawing straws, or throwing dice.
However, unlike truly random selection methods, rock-paper-scissors can be played with a degree of skill, especially
if the game extends over many sessions with the same players; it is often possible to recognize and exploit the
non-random behavior of an opponent.[3]

Game play

Each of the three basic hand-signs (from left to right: rock, paper, and scissors) beats one of the other two, and loses to the other.

The players count aloud to three, or speak the name of the game (e.g. "Rock Paper Scissors!" or "Ro! Cham!
Beau!"), each time raising one hand in a fist and swinging it down on the count. After the third count (saying,
"Scissors!" or "Beau!"), the players change their hands into one of three gestures, which they then "throw" by
extending it towards their opponent. Variations include a version where players use a fourth
count—"Shoot!"—before throwing their gesture, or a version where they only shake their hands twice before
"throwing." The gestures are:
• Rock, represented by a clenched fist.
• Scissors, represented by two fingers extended and separated.
• Paper, represented by an open hand, with the fingers connected (horizontal).
The objective is to select a gesture which defeats that of the opponent. Gestures are resolved as follows:
• Rock blunts or breaks scissors: that is, rock defeats scissors.
• Scissors cut paper: scissors defeats paper.
• Paper covers, sands, or captures rock: paper defeats rock.
If both players choose the same gesture, the game is tied and the players throw again.
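For readers who want to experiment, the resolution rules above fit in a few lines of Python (a minimal sketch; the names are illustrative, not part of any standard library for the game):

```python
import random

# Each gesture beats exactly the gesture it maps to.
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def resolve(a, b):
    """Return 1 if gesture a wins, -1 if gesture b wins, 0 on a tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

# One throw against a uniformly random opponent:
print(resolve("rock", random.choice(list(BEATS))))
```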

History
According to Wuzazu (五杂组), a book by the Chinese Ming Dynasty writer Xie Zhaozhi (谢肇淛) containing the first known mention of the game, it dates back to the time of the Chinese Han Dynasty (206 BCE – 220 CE),[4] when it was called shoushiling (手势令). Li Rihua's (李日华) book Note of Liuyanzhai (六砚斋笔记) also mentions this game, calling it shoushiling (手势令), huozhitou (豁指头), or huoquan (豁拳).
By the 18th century these games were popular in Japan. The game is known in Japan as jan-ken-pon (じゃんけんぽん), more commonly called janken (じゃんけん) and sometimes called rock ken (石拳 ishiken). The origin or derivation of the name is unknown. Ken (拳) means "fist" in Japanese, and janken is categorized as a "ken (fist) game" (拳遊び ken asobi). Janken is believed to have been based on two older ken games, sū ken (数拳, a game of competing numbers shown with the fingers, likely similar to or identical with Morra) and san sukumi ken (三すくみ拳; san sukumi refers to the three-way standoff of a snake, a frog, and a slug, each frozen with fear of one of the others). San sukumi ken has existed in Japan since ancient times, and sū ken was imported from China in the late 17th century; the Chinese name of sū ken is shǒushìlìng (手勢令). Ken games began to increase in popularity in the middle of the 19th century.
Rock-paper-scissors came to be played all over the world by the 20th century.

Variations
A simpler variation of the game is played with just two possibilities instead of three. This version is called matching pennies, the two-strategy equivalent of rock-paper-scissors.
Players have developed numerous cultural and personal variations on the game, from simply playing the same game
with different objects, to expanding into more weapons and rules.
• Rock-paper-scissors is frequently played in a "best two out of three" match, and in many cases psych-outs, shouting, and trick gestures are used to confuse or trick the other player into throwing an illegal toss resulting in a loss. Some players prefer to shout the name of a throw they do not intend to make in order to misdirect and confuse their opponent. The general rule is that the gesture actually thrown is what officially counts in the match: a player who yells "Scissors!" or "Paper!" but throws rock is judged and tallied on the rock. During tournaments, players often prepare their sequence of three gestures prior to the tournament's commencement.[5] [6]

Additional weapons
With an odd number n of choices, the game can be balanced so that each weapon beats (n-1)/2 of the others and loses to the remaining (n-1)/2. No even number of weapons can be made balanced unless some pairs of weapons result in a draw; otherwise, some weapons will necessarily be superior to others.
An example of an unbalanced four-weapon game adds "dynamite" as a trump. Dynamite, expressed as the extended
index finger or thumb, always defeats rock, and is defeated by scissors. Using dynamite generally implies that
dynamite burns paper, but some claim that paper would smother the fuse. The fourth option of dynamite changes
each gesture's odds of winning. For instance, scissors' odds improve from 33% to 50% while rock's odds decrease
from 33% to 25%. Dynamite can be used to cheat by quickly raising or lowering the thumb on the downstroke once
the opponent's play is recognized. Organized rock-paper-scissors contests never use dynamite.[7]

Similarly, the French game "pierre, papier, ciseaux, puits" (stone, paper, scissors, well) is unbalanced: both rock and scissors fall into the well and lose to it, while paper covers both rock and well. Thus two "weapons", well and paper, each defeat two of the other three choices, while rock and scissors each defeat only one. In fact, rock is weakly dominated by well (well beats everything rock beats, and beats rock itself), so a player loses nothing by never throwing rock, which reduces the game to an ordinary three-way cycle among paper, scissors, and well.
One popular five-weapon expansion, invented by Sam Kass and Karen Bryla,[8] adds "Spock" and "lizard" to the
standard three. "Spock" is signified with the Star Trek Vulcan salute, while "lizard" is shown by forming the hand
into a sock-puppet-like mouth. Spock smashes scissors and vaporizes rock; he is poisoned by lizard and disproved by
paper. Lizard poisons Spock and eats paper; it is crushed by rock and decapitated by scissors. This variant was
mentioned in a 2005 article of The Times[9] and appeared in an episode of the sitcom The Big Bang Theory, "The
Lizard-Spock Expansion," in 2008. As long as the number of moves is an odd number and that each move defeats
exactly half of the other moves while being defeated by the other half, any combination of moves will function as a
game. For example, 7, 9, 11, 15, 25 and 101 weapon versions exist[10] Adding new gestures has the effect of
reducing the odds of a tie, while increasing the complexity of the game. The probability of a tie in a balanced,
odd-weapon game can be calculated based on the number of weapons n as 1/n, so the probability of a tie is 1/3 in
RPS, 1/5 in RPSLS and 1/101 in RPS101. It is possible to design balanced RPS games with an even number of
weapons; unfortunately, this requires the introduction of ties. For instance, dynamite could be introduced such that
dynamite defeats rock and paper defeats dynamite while rock and paper tie, as do scissors and dynamite. The
probability of a tie in a balanced, even-weapon RPS game with n weapons (assuming each weapon ties with itself
and only one other weapon, while defeating half of the remaining weapons and being beaten by the other half) can be
calculated as 2/n, which essentially doubles the probability of a tie in comparison with odd-weapon RPS games.
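The tie probabilities quoted above are easy to verify by brute force. The sketch below builds a balanced odd-weapon game in one standard cyclic way (gesture i beats the (n-1)/2 gestures that follow it modulo n; this construction is a choice of illustration, not something the text prescribes) and counts ties over all pairs of throws:

```python
from itertools import product

def resolve_balanced(a, b, n):
    """Balanced n-gesture game, n odd: gesture a beats the (n-1)/2 gestures
    that follow it cyclically. Returns 1 (a wins), -1 (b wins) or 0 (tie)."""
    d = (b - a) % n
    if d == 0:
        return 0
    return 1 if d <= (n - 1) // 2 else -1

def tie_probability(n):
    throws = list(product(range(n), repeat=2))
    return sum(1 for a, b in throws if resolve_balanced(a, b, n) == 0) / len(throws)

for n in (3, 5, 101):
    print(n, tie_probability(n))  # 1/3, 1/5 and 1/101, as stated above
```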

Instances of use in real-life scenarios

American case
In 2006, American federal judge Gregory Presnell from the Middle District of Florida ordered opposing sides in a
lengthy court case to settle a trivial (but lengthily debated) point over the appropriate place for a deposition using the
game of rock-paper-scissors.[11] The ruling in Avista Management v. Wausau Underwriters stated:
Upon consideration of the Motion – the latest in a series of Gordian knots that the parties have been unable to
untangle without enlisting the assistance of the federal courts – it is ORDERED that said Motion is DENIED.
Instead, the Court will fashion a new form of alternative dispute resolution, to wit: at 4:00 P.M. on Friday,
June 30, 2006, counsel shall convene at a neutral site agreeable to both parties. If counsel cannot agree on a
neutral site, they shall meet on the front steps of the Sam M. Gibbons U.S. Courthouse, 801 North Florida
Ave., Tampa, Florida 33602. Each lawyer shall be entitled to be accompanied by one paralegal who shall act
as an attendant and witness. At that time and location, counsel shall engage in one game of "rock, paper,
scissors." The winner of this engagement shall be entitled to select the location for the 30(b)(6) deposition to
be held somewhere in Hillsborough County during the period July 11–12, 2006.[12]
The public release of this judicial order, widely circulated among area lawyers, was seemingly intended to shame the
respective law firms regarding their litigation conduct by settling the dispute in a farcical manner.

Auction house rock-paper-scissors match


When Takashi Hashiyama, CEO of a Japanese television equipment
manufacturer, decided to auction off the collection of impressionist
paintings owned by his corporation, including works by Cézanne,
Picasso, and van Gogh, he contacted two leading auction houses,
Christie's International and Sotheby's Holdings, seeking their proposals
on how they would bring the collection to the market as well as how
they would maximize the profits from the sale. Both firms made
elaborate proposals, but neither was persuasive enough to get
Hashiyama’s business. Unwilling to split up the collection into separate
auctions, Hashiyama asked the firms to decide between themselves
who would hold the auction, which included Cézanne's "Large Trees
Under the Jas de Bouffan", worth $12–16 million.

The houses were unable to reach a decision. Hashiyama told the two firms to play rock-paper-scissors to decide who would get the rights to the auction, explaining that "it probably looks strange to others, but I believe this is the best way to decide between two things which are equally good".

Large Trees Under the Jas de Bouffan sold for $11,776,000 at Christie's.[13]
The auction houses had a weekend to come up with a choice of move. Christie's went to the 11-year-old twin
daughters of the international director of Christie's Impressionist and Modern Art Department Nicholas Maclean,
who suggested "scissors" because "Everybody expects you to choose 'rock'." Sotheby's said that they treated it as a
game of chance and had no particular strategy for the game, but went with "paper".[14]
Christie's won the match and sold the twenty million dollar collection, with millions of dollars of commission for the
auction house.

Japanese girl group single participation


In 2010, the Japanese idol girl group AKB48 used a rock-paper-scissors contest to determine which of the members would participate in a single. They used the format again on September 20, 2011, with the contest broadcast in four countries; the final winner was Mariko Shinoda.[15] [16] [17] [18]

Rock-paper-scissors in video games


In many real-time strategy, first-person shooter, and role-playing video games, it is common for a group of possible
weapons or unit types to interact in a rock-paper-scissors style, where each selection is strong against a particular
choice, but weak against another, emulating the cycles in real world warfare (such as cavalry being strong against
archers, archers being strong against pikemen, and pikemen being strong against cavalry[19] ). Such game mechanics
can make a game somewhat self-balancing, and prevent gameplay from being overwhelmed by a single dominant
strategy.[19]
Many card-based video games in Japan use the rock-paper-scissors system as their core fighting system, with the
winner of each round being able to carry out their designated attack. Other games use simple variants of
rock-paper-scissors as subgames.

Rock-paper-scissors analogs in nature

Lizard mating strategies


The common side-blotched lizard (Uta stansburiana) exhibits a rock-paper-scissors pattern in its mating strategies.
Of its three color types of males, "orange beats blue, blue beats yellow, and yellow beats orange" in competition for
females, which is similar to the rules of rock-paper-scissors.[20] [21]

Coliform bacteria
Some bacteria also exhibit a rock-paper-scissors dynamic when they engage in antibiotic production. The theory for
this finding was demonstrated by computer simulation and in the laboratory by Benjamin Kerr, working at Stanford
University with Brendan Bohannan.[22] The antibiotics in question are the bacteriocins - more specifically, colicins
produced by Escherichia coli. Biologist Benjamin C. Kirkup, Jr. further demonstrated that the colicins were active as E. coli strains compete with each other in the intestines of mice, and that the rock-paper-scissors dynamics allow the competition among strains to continue indefinitely: antibiotic-producers kill antibiotic-sensitive strains; antibiotic-resistant strains, which withstand the toxin without paying the cost of producing it, out-compete the producers; the sensitive strains, free of both costs, then out-compete the resisters; and the producers multiply again.[23]

Strategies
It is easy to see that it is impossible to gain an advantage over a truly random opponent. However, by exploiting the
weaknesses of nonrandom opponents, it is possible to gain a significant advantage.[24] Indeed, human players tend to
be nonrandom.[25] As such, there have been programming competitions for algorithms that play
rock-paper-scissors.[24] [26] [27]

Algorithms
As a consequence of rock-paper-scissors programming contests, many strong algorithms have emerged.[24] [26] [27]
For example, Iocaine Powder, which won the First International RoShamBo Programming Competition in 1999,[26]
uses a heuristically designed compilation of strategies.[28] For each strategy it employs, it also has six metastrategies
which defeat second-guessing, triple-guessing, as well as second-guessing the opponent, and so on. The optimal
strategy or metastrategy is chosen based on past performance. The main strategies it employs are history matching,
frequency analysis, and random guessing. Its strongest strategy, history matching, searches for a sequence in the past that matches the last few moves in order to predict the opponent's next move. In frequency analysis, the
program simply identifies the most frequently played move. The random guess is a fallback method that is used to
prevent a devastating loss in the event that the other strategies fail. More than ten years later, the top performing
strategies on an ongoing rock-paper-scissors programming competition [29] similarly use metastrategies.[30]
However, there have been some innovations, such as using multiple history matching schemes that each match a
different aspect of the history - for example, the opponent's moves, the program's own moves, or a combination of
both.[30] There have also been other algorithms based on Markov chains.[31]
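As a concrete illustration of the frequency-analysis idea described above, here is a minimal Python bot (an illustrative sketch, not any actual competition entry) that counters the opponent's most frequent move and falls back to random play while its sample is small:

```python
import random
from collections import Counter

BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}
COUNTER = {loser: winner for winner, loser in BEATS.items()}  # what beats each move

def frequency_bot(opponent_history, min_samples=5):
    """Counter the opponent's most frequent move, or throw randomly early on."""
    if len(opponent_history) < min_samples:
        return random.choice(list(BEATS))
    most_common = Counter(opponent_history).most_common(1)[0][0]
    return COUNTER[most_common]

# An opponent who overplays rock is soon countered with paper:
history = ["rock", "rock", "paper", "rock", "scissors", "rock"]
print(frequency_bot(history))  # paper
```

A bot this simple is itself predictable, which is exactly why the strong programs layer metastrategies on top of such base strategies.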

Tournaments

World Rock Paper Scissors Society sanctioned tournaments
Starting in 2002, the World Rock Paper Scissors Society standardized a
set of rules for international play[32] and has overseen annual
International World Championships. These open, competitive
championships have been widely attended by players from around the
world and have attracted widespread international media attention.[33] [34] [35] [36] [37] WRPS events are noted for their large cash prizes,[38] elaborate staging, and colorful competitors. In 2004, the championships were broadcast on the U.S. television network Fox Sports Net, with the winner being Lee Rammage, who went on to compete in at least one subsequent championship.[39] [40]

Two players at the 4th UK Rock Paper Scissors Championships, 2010.

Year   World Champion      Country
2002   Peter Lovering      Canada
2003   Rob Krueger         Canada
2004   Lee Rammage         Canada
2005   Andrew Bergel       Canada
2006   Bob Cooper          United Kingdom
2007   Andrea Farina[41]   USA
2008   Monica Martinez     Canada
2009   Tim Conrad          USA

USARPS Tournaments
USA Rock Paper Scissors League is a US-based rock-paper-scissors league. It is sponsored by Bud Light. Leo Bryan
Pacis is the commissioner of the USARPS.
In April 2006, the inaugural USARPS Championship was held in Las Vegas. Following months of regional
qualifying tournaments held across the US, 257 players were flown to Las Vegas for a single-elimination tournament
at the House of Blues where the winner received $50,000. The tournament was shown on the A&E Network on June
12, 2006.
The $50,000 2007 USARPS Tournament took place at the Las Vegas Mandalay Bay in May 2007.
In 2008, Sean "Wicked Fingers" Sears beat out 300 other contestants and walked out of the Mandalay Bay Hotel &
Casino with $50,000 after defeating Julie "Bulldog" Crossley in the finals.
The inaugural Budweiser International Rock, Paper, Scissors Federation Championship was held in Beijing, China
after the close of the 2008 Summer Olympic Games at Club Bud. A Belfast man won the competition.[42] Sean Sears finished 3rd.

National XtremeRPS Competition 2007-2008


The XtremeRPS National Competition[43] is a US nationwide RPS competition with Preliminary Qualifying contests
that started in January 2007 and ended in May 2008, followed by regional finals in June and July 2008. The national
finals were to be held in Des Moines, Iowa in August 2008, with a chance to win up to $5,000.

UK Rock Paper Scissors Championship


The 1st UK Championship took place on July 13, 2007, in Rhayader, Powys, and the event was held there again on July 14, 2008. Steve Frost of Powys is the current holder of this WRPS sanctioned event.
The 3rd UK Championships took place on June 9, 2009, in Exeter, Devon. Nick Hemley, from Woking, Surrey, won
the contest just beating Chris Grimwood.[44]
The 4th UK Championships took place on November 13, 2010, at the Durell Arms in West London. Paul Lewis from Woking beat Ed Blake in the final and collected the £100 first prize and UK title. Richard Daynes Appreciation Society won the team event. 80 competitors took part in the main contest, with 10 entries in the team contest.
The 5th UK Rock Paper Scissors Championships took place in London on Saturday 22 October 2011.[45] The event was open to 256 competitors and also included a team contest. The 2011 singles tournament was won by Max Deeley, and the team contest was won by The Big Faces (Andrew Bladon, Jamie Burland, Tom Wilkinson and Captain Joe Kenny).[46]

Guinness Book of World Records


On April 3, 2009, Colonel By Secondary School in Ottawa, Canada, held the largest recorded rock-paper-scissors tournament, with approximately 1150 participants. The contest was open to all students in grades 9 to 12, and included teachers. The winner, Cody Lombardo, took home a trophy and had his name entered in the Guinness Book of World Records.[47]
On July 9, 2010, over 6500 attendees of the LIFE 2010 Conference in Louisville, Kentucky, participated in the
largest tournament of Rock-Paper-Scissors ever, shattering the previous record of 1150 participants.

World Series of Rock Paper Scissors


Former Celebrity Poker Showdown host and USARPS Head Referee[48] Phil Gordon has hosted an annual $500
World Series of Rock Paper Scissors event in conjunction with the World Series of Poker since 2005.[49] The winner
of the WSORPS receives an entry into the WSOP Main Event. The event is an annual fundraiser for the "Cancer
Research and Prevention Foundation" via Gordon's charity Bad Beat on Cancer. Poker player Annie Duke won the
Second Annual World Series of Rock Paper Scissors.[50] [51] The tournament is taped by ESPN and highlights are
covered during "The Nuts" section of ESPN's annual WSOP broadcast.[52] [53] [54] 2009 was the fifth year of the
tournament.

Red Bull Roshambull World Online Series


The Red Bull Roshambull was an event played through the Facebook application "Red Bull Roshambull". Although originally an unrecognized event, in January 2011 the World RPS gave it officially recognized event status because of the number of regular participants in the World Championships and other recognized leagues who had started to compete. However, the event was still treated as a non-ranking event, and any awards or titles given in the tournament are not officially recognized outside it.
The World Series is a multiple-tournament contest in which a player's performance in each separate tournament is scored; after a number of tournaments within the event have taken place, a triple-elimination playoff decides an overall championship. Over the last few seasons, extra events have been added to the series, the most popular of them being the "Hidden Stars" (a tournament giving novice players on the application, who may not know about the event, a chance to compete without any regulars taking part) and the World Series Blitz (in which all the events take place over one day rather than once a week as in the main event).
A key feature of the event was the number of players with no reputation from real-life tournaments who showed great ambition in competing with those who play regularly in the most recognized events. This created a small community of players, as well as a small rivalry between the two sets of players to prove who is better.
The tournament was wound up in 2012 after Red Bull's support for the application was withdrawn and a change in Facebook's code forced the application's removal. Because of the game's popularity with its regular users, the World RPS, along with hardcore fans of the game, has begun discussing with the people who helped Red Bull run the game the possibility of recreating the application under a new name.

Season: Summer 2009
Overall result: 1st Alan Giles; 2nd Mark Thomas; 3rd Frances Anne Ricketts
League phase: 1st Mark Thomas; 2nd Monika Hjelmas; 3rd Dave Dungan
Hidden Stars, World Series Blitz, World Team Challenge: not contested this season

Season: Winter 2009/10
Overall result: 1st Frances Anne Ricketts; 2nd Mark Thomas; 3rd Roman P
League phase: 1st Frances Anne Ricketts; 2nd Clayton Dwyer; 3rd Andrew Lampman
Hidden Stars, World Series Blitz, World Team Challenge: not contested this season

Season: Summer 2010
Overall result: 1st Brett McFarlane; 2nd Tom Bussineau; 3rd Dave Dungan
League phase: 1st Andrew Lampman; 2nd Brett McFarlane; 3rd Mark Tuden
Hidden Stars: 1st Lex Harrison; 2nd Jaden Urquhart; 3rd Aaron Huehn
World Series Blitz: not contested this season
World Team Challenge: 1st Norway (Erik Westnes, Anna Kvalheim, Monika Hjelmas); 2nd African Nations (Mac Anamourlis, Craig Von Hagen, Jakqui Krobo); 3rd England (James Heyes, Sarah Dixon, Andy Mills)

Season: Winter 2010/11
Overall result: 1st Richard Morgan; 2nd Andrew Lampman; 3rd Carol Lampman
League phase: 1st Carol Lampman; 2nd Andrew Lampman; 3rd Richard Morgan
Hidden Stars: 1st Amanda Parsons; 2nd Leona Colton; 3rd Braxton R. Rodgers
World Series Blitz: 1st Andrew Lampman; 2nd Carol Lampman; 3rd Mark Tuden
World Team Challenge: not contested this season

Season: 2011 (changed to calendar-year seasons)
Overall result: not yet started
League phase: 1st Andrew Lampman; 2nd Nikki Montoya; 3rd Carol Lampman
Hidden Stars: not held this year
World Series Blitz: 1st Andrew Lampman; 2nd Carol Lampman; 3rd Anthony Argyou
World Team Challenge: 1st USA (Tom Bussineau, Mark Thomas, Mark Tuden); 2nd Norway (Erik Westnes, Anna Kvalheim, Monika Hjelmas); 3rd Australia (Brett McFarlane, Clayton Dwyer, Frances Anne Ricketts)

In December 2010, a player called Maxamillion Air became one of the first online-only players from the World
Series to play in an official event.

References
Notes
[1] "Game Basics" (http:/ / www. rpschamps. com/ templates/ j15_rps/ images/ propaganda/ faq/ game-basics/ ). . Retrieved 2009-12-05.
[2] St. John, Kelly (2003-03-19). "Ready, set ... Roshambo! Contestants vie for $1,000 purse in Rock, Scissors, Paper contest" (http:/ / www.
sfgate. com/ cgi-bin/ article. cgi?f=/ c/ a/ 2003/ 03/ 16/ BA251812. DTL). San Francisco Chronicle. . Retrieved 2007-11-20.
[3] Fisher, Len (2008). Rock, paper, scissors: game theory in everyday life. Basic Books. p. 94. ISBN 9780465009381.
[4] Moore, Michael E.; Sward, Jennifer (2006). Introduction to the game industry. Upper Saddle River, NJ: Pearson Prentice Hall. p. 535.
ISBN 9780131687431.
[5] Steve Vockrodt, "Student rivals throw down at rock, paper, scissors tournament" (http:/ / www2. ljworld. com/ news/ 2007/ apr/ 08/
student_rivals_throw_down_rock_paper_scissors_tour/ ), Lawrence Journal-World, April 8, 2007. Retrieved April 13, 2007.
[6] Michael Y. Park, "Rock, Paper, Scissors, the Sport" (http:/ / www. foxnews. com/ story/ 0,2933,188380,00. html), Fox News, March 20,
2006. Retrieved April 13, 2007.
[7] World RPS Society (2002). "The Myth of Dynamite Exposed" (http:/ / www. worldrps. com/ article4. html). Retrieved 2007-11-09.
[8] Sam Kass. "Original Rock-Paper-Scissors-Spock-Lizard Page" (http:/ / www. samkass. com/ theories/ RPSSL. html). . Retrieved 2009-03-11.
[9] "... and paper scissors" (http:/ / www. timesonline. co. uk/ tol/ comment/ leading_article/ article1080425. ece). London: The Times Online. 11
June 2005. . Retrieved 2009-06-09.
[10] "RPSx" (http:/ / www. umop. com/ rps. htm). .
[11] "Exasperated judge resorts to child's game" (http:/ / seattletimes. nwsource. com/ html/ nationworld/ 2003052251_game10. html). The
Seattle Times. Associated Press. 2006-06-26. . Retrieved 2006-08-20.
[12] Presnell, Gregory (June 7, 2006). "Order of the court: Avista Management vs. Wausau Underwriters Insurance Co." (http:/ / money. cnn.
com/ 2006/ 06/ 07/ magazines/ fortune/ judgerps_fortune/ index. htm). CNN.com. . Retrieved 2006-06-08.
[13] Art/Auctions logo, Impressionist & Modern Art, Christie's, 7 pm, May 4, 2005, Sale 1514 (http:/ / www. thecityreview. com/ s05cimp1.
html).
[14] Vogel, Carol (April 29, 2005). "Rock, Paper, Payoff: Child's Play Wins Auction House an Art Sale" (http:/ / www. nytimes. com/ 2005/ 04/
29/ arts/ design/ 29scis. html). The New York Times.
[15] "AKB48 to broadcast rock-paper-scissors tournament in Singapore + Thailand" (http:/ / www. tokyohive. com/ 2011/ 09/
akb48-to-broadcast-rock-paper-scissors-tournament-in-singapore-thailand/ ). Tokyohive. September 12, 2011. . Retrieved October 6, 2011.
[16] "AKB48’s second "Rock, Paper, Scissors" Tournament confirmed" (http:/ / www. tokyohive. com/ 2011/ 07/
akb48s-second-rock-paper-scissors-tournament-confirmed/ ). Tokyohive. July 3, 2011. . Retrieved October 6, 2011.
[17] "AKB48 :じゃんけん大会を再び開催 9月に武道館で SKE48、NMB48ら総勢71人参加" (http:/ / mantan-web. jp/ 2011/ 07/ 03/
20110703dog00m200011000c. html) (in Japanese). MANTANWEB. Mainichi Shimbun Digital Co.Ltd. July 3, 2011. . Retrieved October 6,
2011.
[18] "AKB48 Janken Tournament results for 24th single Senbatsu members!" (http:/ / www. tokyohive. com/ 2011/ 09/
live-updates-akb48-janken-tournament-results-for-24th-single-senbatsu-members/ ). Tokyohive. September 20, 2011. . Retrieved October 6,
2011.
[19] Egenfeldt-Nielsen, Simon; Jonas Heide Smith, Susana Pajares Tosca (2008). Understanding video games: the essential introduction. Taylor
& Francis. pp. 103. ISBN 0415977215.
[20] Sinervo, Barry (2001-02-20). "The rock-paper-scissors game and the evolution of alternative male strategies" (http:/ / bio. research. ucsc.
edu/ ~barrylab/ lizardland/ male_lizards. overview. html). . Retrieved 2006-08-20.
[21] Barry Sinervo on the 7th Avenue Project Radio Show. "The Games Lizards Play" (http:/ / 7thavenueproject. com/ post/ 451026680/
barry-sinervo-lizards-and-evolution). .
[22] Nature. 2002 Jul 11;418(6894):171-4
[23] Nature. 2004 Mar 25;428(6981):412-4
[24] Knoll, Byron. "Rock Paper Scissors Programming Competition" (http:/ / www. rpscontest. com). . Retrieved 2011-06-15.
[25] Dance, Gabriel and Jackson, Tom (2010-10-07). "Rock-Paper-Scissors: You vs. the Computer" (http:/ / www. nytimes. com/ interactive/
science/ rock-paper-scissors. html). The New York Times. . Retrieved 2011-06-15.
[26] "First International RoShamBo Programming Competition" (http:/ / webdocs. cs. ualberta. ca/ ~darse/ rsb-results1. html). 1999-10-01. .
Retrieved 2011-06-15.
[27] "Second International RoShamBo Programming Competition" (http:/ / webdocs. cs. ualberta. ca/ ~darse/ rsbpc. html). 2001-03-20. .
Retrieved 2011-06-15.
[28] Egnor, Dan (1999-10-01). "Iocaine Powder Explained" (http:/ / www. ofb. net/ ~egnor/ iocaine. html). . Retrieved 2011-06-15.
[29] http:/ / www. rpscontest. com/
[30] dllu (2011-06-14). "Rock Paper Scissors Programming Competition entry: DNA werfer 5 L500" (http:/ / www. rpscontest. com/ entry/
109010). . Retrieved 2011-06-15.
[31] rfw (2011-05-22). "Rock Paper Scissors Programming Competition entry: sixth-order markov chain" (http:/ / www. rpscontest. com/ entry/
34014). . Retrieved 2011-06-15.

[32] "Game Basics" (http:/ / www. worldrps. com/ index. php?option=com_content& task=view& id=14& Itemid=31). World Rock Paper
Scissors Society. . Retrieved 2006-08-20.
[33] Hruby, Patrick (2004-12-10). "Fists fly in game of strategy" (http:/ / www. washingtontimes. com/ national/ 20041210-120729-4008r. htm).
The Washington Times. . Retrieved 2006-08-20.
[34] "2003 World Rock Paper Scissors Championship" (http:/ / www. npr. org/ templates/ story/ story. php?storyId=1477870). All Things
Considered (National Public Radio). 2003-10-24. . Retrieved 2006-08-20.
[35] "Rock, Paper, Scissors A Sport?" (http:/ / www. cbsnews. com/ stories/ 2003/ 10/ 29/ earlyshow/ contributors/ melindamurphy/ main580709.
shtml). CBS News. 2003-10-23. . Retrieved 2006-08-20.
[36] "Rock Paper Scissors contest being held" (http:/ / www. usatoday. com/ news/ offbeat/ 2003-10-27-rock-paper_x. htm). USA Today.
Associated Press. 2003-10-27. . Retrieved 2006-08-20.
[37] Park, Michael Y. (2006-03-20). "Rock, Paper, Scissors, the Sport" (http:/ / www. foxnews. com/ story/ 0,2933,188380,00. html). Fox News.
. Retrieved 2006-08-20.
[38] "Gallery" (http:/ / web. archive. org/ web/ 20060315203450/ http:/ / www. worldrps. com/ index. php?option=com_gallery2& Itemid=30).
World RPS society. 2005-11-13. Archived from the original (http:/ / www. worldrps. com/ index. php?option=com_gallery2& Itemid=30) on
2006-03-15. . Retrieved 2006-08-20.
[39] Crick, Jennifer (2005-06-13). "HAND JIVE - June 13, 2005" (http:/ / money. cnn. com/ magazines/ fortune/ fortune_archive/ 2005/ 06/ 13/
8262549/ index. htm). Money.cnn.com. . Retrieved 2009-06-05.
[40] "World RPS Society - 2004 Champion Lee Rammage crushes a pair of Scissors" (http:/ / www. stanley-paul. com/ index.
php?option=com_gallery2& Itemid=30& g2_view=core. ShowItem& g2_itemId=16). Stanley-paul.com. 2005-11-13. . Retrieved 2009-06-05.
[41] Rock Paper Scissors crowns a queen as its champ - Weird News - Canoe.ca (http:/ / cnews. canoe. ca/ CNEWS/ WeirdNews/ 2007/ 10/ 15/
4577669-sun. html)
[42] "Belfast man tops world at rock, paper, scissors | Irish Examiner" (http:/ / www. examiner. ie/ breaking/ ireland/ mhqlojkfidsn/ ).
Examiner.ie. 2008-08-27. . Retrieved 2009-06-05.
[43] "XTreme RPS Competition by Showtime Entertainment" (http:/ / www. rpsrocks. com). . Retrieved 2007-01-07.
[44] "Pub hosts UK 'rock' championship" (http:/ / news. bbc. co. uk/ 1/ hi/ england/ devon/ 8072439. stm). BBC News. 28 May 2009. . Retrieved
2009-05-28.
[45] http:/ / www. ukrockpaperscissorschampionships. com/ eventinformation. aspx
[46] http:/ / ukrockpaperscissorschampionships. com/ 2011results. aspx
[47] Sherwin, Fred. "Colonel By sets new World Record for largest rock, paper, scissors tournament" (http:/ / www. orleansonline. ca/ pages/
N2009040302. htm). Orleans Online. . Retrieved 2009-04-04.
[48] "Master Rosh's Analysis of the Final Match" (http:/ / www. usarps. com/ tourney-info/ roshs-blog/ article/ view/
master-roshs-analysis-of-the-final-match/ 97/ ). USARPS Leagues. USARPS. 2005-06-28. . Retrieved 2009-07-31.
[49] Friess, Steven (2007-05-14). "Las Vegas's latest game: Rock, paper, scissors" (http:/ / www. nytimes. com/ 2007/ 05/ 14/ world/ americas/
14iht-rock. 1. 5699920. html). NY Times. . Retrieved 2009-07-23.
[50] Levitt, Steven (2006-07-26). "Annie Duke Wins 2nd Annual World Series of Poker’s Rock, Paper, Scissors Tournament" (http:/ /
freakonomics. blogs. nytimes. com/ tag/ rock-paper-scissors/ ). New York Times. . Retrieved 2009-07-24.
[51] "Where's Annie?" (http:/ / sports. espn. go. com/ espn/ print?id=2540622& type=blogEntry). ESPN.com. 2006-08-05. . Retrieved
2009-07-24.
[52] Caldwell, John (2005-06-15). "The REAL championship at the World Series of Poker" (http:/ / www. pokernews. com/ news/ 2005/ 06/
the-real-championship-wsop. htm). Poker News. . Retrieved 2009-07-24.
[53] "WSOP Schedule Whiplash" (http:/ / news. pokerpages. com/ index. php?option=com_simpleblog& task=view& id=86). Poker Pages.
2005-06-14. . Retrieved 2009-07-24.
[54] Craig, Michael. "EXCLUSIVE COVERAGE: Roshambo - The Rematch" (http:/ / pokerworks. com/ blogs/ craigsjournal/ 2006/ 07/ 27/
exclusive-coverage-roshambo-the-rematch/ ). Pokerworks. . Retrieved 2009-07-21.

Bibliography
• Alonzo, Suzanne H. & Sinervo, Barry (2001): Mate choice games, context-dependent good genes, and genetic
cycles in the side-blotched lizard, Uta stansburiana. Behavioral Ecology Sociobiology 49 (2-3): 176–186.
doi:10.1007/s002650000265 (HTML abstract)
• Culin, Stewart (1895): Korean Games, With Notes on the Corresponding Games at China and Japan. (evidence
of nonexistence of rock-paper-scissors in the West)
• Gomme, Alice Bertha (1894, 1898): The traditional games of England, Scotland, and Ireland, 2 vols. (more
evidence of nonexistence of rock-paper-scissors in the West)
• Opie, Iona & Opie, Peter (1969): Children's Games in Street and Playground Oxford University Press, London.
(Details some variants on rock-paper-scissors such as 'Man, Earwig, Elephant' in Indonesia, and presents evidence
for the existence of 'finger throwing games' in Egypt as early as 2000 B.C.)

• Sinervo, Barry (2001): Runaway social games, genetic cycles driven by alternative male and female strategies,
and the origin of morphs. Genetica 112-113(1): 417-434. doi:10.1023/A:1013360426789 (HTML abstract)
• Sinervo, Barry & Clobert, Jean (2003): Morphs, Dispersal Behavior, Genetic Similarity, and the Evolution of
Cooperation. Science 300(5627): 1949-1951. doi:10.1126/science.1083109 (HTML abstract) Supporting Online
Material (http://www.sciencemag.org/cgi/content/full/sci;300/5627/1949/DC1)
• Sinervo, Barry & Lively, C. M. (1996): The Rock-Paper-Scissors Game and the evolution of alternative male
strategies. Nature 380: 240-243. doi:10.1038/380240a0 (HTML abstract)
• Sinervo, Barry & Zamudio, K. R. (2001): The Evolution of Alternative Reproductive Strategies: Fitness
Differential, Heritability, and Genetic Correlation Between the Sexes. Journal of Heredity 92(2): 198-205. PDF
fulltext (http://jhered.oxfordjournals.org/cgi/reprint/92/2/198.pdf)
• Sogawa, Tsuneo (2000): Janken. Monthly Sinica 11(5). [Article in Japanese]
• Walker, Douglas & Walker, Graham (2004): The Official Rock Paper Scissors Strategy Guide. Fireside.
(strategy, tips and culture from the World Rock Paper Scissors Society).

External links
• USA Rock Paper Scissors League (http://www.usarps.com/)
• World Rock Paper Scissors Society (http://www.worldrps.com/)
• UK Rock Paper Scissors Championships (http://www.wackynation.com/)
• The official RPS movie (http://www.flyingscissors.com/)
• Abrams, Michael (2004-07-05). "Throwing for The Gold" (http://www.forbes.com/execpicks/fyi/2005/0407/
061.html). Pursuits (Forbes FYI). Retrieved 2007-04-09.
• Hegan, Ken (2004-01-07). "Hand to Hand Combat: Down and dirty at the World Rock Paper Scissors
Championship" (http://www.rollingstone.com/culture/news/hand-to-hand-combat-20040107). Rolling Stone
Feature Article. Retrieved 2009-03-30.
• Etymological origin of Janken (http://web.archive.org/web/20070718033840/http://gogensanpo.hp.
infoseek.co.jp/main2.html) (Japanese)
• About Ken games (http://www.asahi-net.or.jp/~RP9H-TKHS/dg_ken.htm) (Japanese)
• Origins of Janken (http://www001.upp.so-net.ne.jp/yasuaki/misc/cult/cultc4.htm) (Japanese)
• Janken in the world (http://www.netlaputa.ne.jp/~tokyo3/janken.html#起源/) (Japanese)
• A biological example of rock-paper-scissors: Interview with biologist Barry Sinervo on the 7th Avenue Project
Radio Show (http://7thavenueproject.com/post/451026680/barry-sinervo-lizards-and-evolution/)
• Rock Paper Scissors Programming Competition (http://www.rpscontest.com/)

Pirate game
The pirate game is a simple mathematical game. It illustrates
how, if assumptions conforming to a homo economicus model of
human behaviour hold, outcomes may be surprising. It is a
multi-player version of the ultimatum game.

The game
There are 5 rational pirates, A, B, C, D and E. They find 100 gold coins. They must decide how to distribute them.
From Howard Pyle's Book of Pirates
The pirates have a strict order of seniority: A is superior to B, who
is superior to C, who is superior to D, who is superior to E.
The pirate world's rules of distribution are thus: the most senior pirate proposes a distribution of coins. The pirates, including the proposer, then vote on whether to accept this distribution. If the proposal is approved by a majority or a tie vote, it is carried out. If not, the proposer is thrown overboard from the pirate ship and dies, and the next most senior pirate makes a new proposal to begin the system again.
Pirates base their decisions on three factors. First of all, each pirate wants to survive. Second, given survival, each
pirate wants to maximize the number of gold coins he receives. Third, each pirate would prefer to throw another
overboard, if all other results would otherwise be equal.[1] The pirates do not trust each other, and will neither make
nor honor any promises between pirates apart from the main proposal.

The result
It might be expected intuitively that Pirate A will have to allocate little if any to himself for fear of being voted off so
that there are fewer pirates to share between. However, this is as far from the theoretical result as is possible.
This is apparent if we work backwards: if all except D and E have been thrown overboard, D proposes 100 for
himself and 0 for E. He has the casting vote, and so this is the allocation.
If there are three left (C, D and E) C knows that D will offer E 0 in the next round; therefore, C has to offer E 1 coin
in this round to make E vote with him, and get his allocation through. Therefore, when only three are left the
allocation is C:99, D:0, E:1.
If B, C, D and E remain, B knows this when he makes his decision. To avoid being thrown overboard, he can simply offer 1 to D. Because B has the casting vote, D's support alone is sufficient. Thus he proposes B:99, C:0, D:1,
E:0. One might consider proposing B:99, C:0, D:0, E:1, as E knows he won't get more, if any, if he throws B
overboard. But, as each pirate is eager to throw each other overboard, E would prefer to kill B, to get the same
amount of gold from C.
Assuming A knows all these things, he can count on C and E's support for the following allocation, which is the final
solution:
• A: 98 coins
• B: 0 coins
• C: 1 coin
• D: 0 coins
• E: 1 coin[1]
Also, A:98, B:0, C:0, D:1, E:1 or other variants are not good enough, as D would rather throw A overboard to get the
same amount of gold from B.
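The backward-induction argument above mechanizes neatly. The following Python sketch (an illustration, not part of the original puzzle) solves the game for any number of pirates by repeatedly buying the cheapest votes; it assumes the standard version in which there are enough coins, so that survival never binds:

```python
def pirate_allocation(n_pirates, coins=100):
    """Backward-induction solution. Pirate 0 is the most senior (the proposer);
    a proposal passes on a majority or tie, and a pirate votes yes only if the
    offer strictly beats what he would get after throwing the proposer over."""
    allocation = [coins]                       # one pirate left: he takes all
    for n in range(2, n_pirates + 1):
        previous = allocation                  # payoffs in the (n-1)-pirate game
        votes_to_buy = (n + 1) // 2 - 1        # majority-or-tie, minus own vote
        cheapest = sorted(range(n - 1), key=lambda i: previous[i])[:votes_to_buy]
        offer = [0] * n
        for i in cheapest:
            offer[i + 1] = previous[i] + 1     # indices shift: new proposer is 0
        offer[0] = coins - sum(offer[1:])
        allocation = offer
    return allocation

print(pirate_allocation(5))  # [98, 0, 1, 0, 1], matching A through E above
```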

Extension
The solution follows the same general pattern for other numbers of pirates and/or coins; however, the game changes in character once there are more than twice as many pirates as coins. Ian Stewart wrote about
Steve Omohundro's extension to an arbitrary number of pirates in the May 1999 edition of Scientific American and
described the rather intricate pattern that emerges in the solution.[1]

References
[1] Stewart, Ian (May 1999), "A Puzzle for Pirates" (http:/ / euclid. trentu. ca/ math/ bz/ pirates_gold. pdf), Scientific American: 98–99.

Dictator game
The dictator game is a game in experimental economics, similar to the ultimatum game. Experimental results offer
evidence against the rationally self-interested individual (sometimes called the homo economicus) concept of
economic behavior,[1] though precisely what to conclude from the evidence is controversial.[2]

Description
In the dictator game, the first player, "the proposer", determines an allocation (split) of some endowment (such as a
cash prize). The second player, "the responder", simply receives the remainder of the endowment left by the
proposer. The responder's role is entirely passive (the responder has no strategic input into the outcome of the game).
As a result, the dictator game is not formally a game at all (as the term is used in game theory). To be a game, every
player's outcome must depend on the actions of at least some others. Since the proposer's outcome depends only on
his own actions, this situation is one of decision theory and not game theory. Despite this formal point, the dictator
game is used in the game theory literature as a degenerate game.
This game has been used to test the homo economicus model of individual behavior: if individuals were only
concerned with their own economic well being, proposers (acting as dictators) would allocate the entire good to
themselves and give nothing to the responder. Experimental results have indicated that individuals often allocate
money to the responders, reducing the amount of money they receive.[3] These results appear robust: for example,
Henrich, et al. discovered in a wide cross cultural study that proposers do allocate a non-zero share of the
endowment to the responder.[4]
If these experiments appropriately reflect individuals' preferences outside of the laboratory, these results appear to
demonstrate that either:
1. Proposers fail to maximize their own expected utility,
2. Proposers' utility functions may include non-tangible harms they incur (for example self-image or anticipated
negative views of others in society), or
3. Proposers' utility functions may include benefits received by others (a concrete sketch of this third possibility follows below).
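A minimal way to make the third possibility concrete is to give the proposer an inequity-averse utility function in the style of Fehr and Schmidt; the functional form and parameter values below are one illustration, not something the experimental literature commits to:

```python
def utility(keep, give, beta):
    """Fehr-Schmidt-style utility with only the 'advantageous inequality' term:
    beta is the guilt felt per unit the dictator earns above the recipient."""
    return keep - beta * max(keep - give, 0)

def best_transfer(endowment=10, beta=0.6):
    return max(range(endowment + 1),
               key=lambda g: utility(endowment - g, g, beta))

print(best_transfer(beta=0.0))  # 0: a purely self-interested dictator keeps all
print(best_transfer(beta=0.6))  # 5: a sufficiently guilt-averse dictator splits
```

With beta below 1/2 the optimum is still to give nothing, so the model also accommodates the many subjects who keep everything.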
Additional experiments have shown that subjects maintain a high degree of consistency across multiple versions of
the dictator game in which the cost of giving varies.[5] This suggests that dictator game behavior is, in fact, altruism
instead of the failure of optimizing behavior. Other experiments have shown a relationship between political
participation and dictator game giving, suggesting that it may be an externally valid indicator of concern for the
well-being of others.[6] [7]

Challenges
The idea that the highly mixed results of the dictator game prove or disprove rationality in economics is not widely accepted. The results offer both support for the classical assumptions and notable exceptions, which have led to improved holistic economic models of behavior. Some authors have suggested that giving in the dictator game does not entail
that individuals wish to maximize others' benefit (altruism). Instead they suggest that individuals have some negative
utility associated with being seen as greedy, and are avoiding this judgment by the experimenter. Some experiments
have been performed to test this hypothesis with mixed results.[8]
Further experiments testing experimental effects have been performed. Bardsley has performed experiments where
individuals are given the opportunity to give money, give nothing, or take money from the respondent.[2] In these
cases most individuals, far from showing altruism, actually take money. Moreover, comparing taking games with dictator games that start from the same endowments shows that most people who give in the dictator game would take in a taking game. Bardsley suggests two interpretations for these results. First, it may be that the range of options
provides different cues to experimental subjects about what is expected of them. "Subjects might perceive dictator
games as being about giving, since they can either do nothing or give, and so ask themselves how much to give.
Whilst the taking game... might appear to be about taking for analogous reasons, so subjects ask themselves how
much to take."[2] On this interpretation dictator game giving is a response to demand characteristics of the
experiment. Second, subjects' behavior may be affected by a kind of framing effect. What a subject considers to be
an appropriately kind behavior depends on the range of behaviors available. In the taking game, the range includes
worse alternatives than the dictator game. As a result giving less, or even taking, may appear equally kind.

Trust game
The trust game extends the dictator game one step by having the reward that the dictator can (unilaterally) split between himself and a partner depend partly on an initial gift from that partner. The initial move is made by the dictator's partner, who must decide how much of his or her initial endowment to entrust to the dictator (in the hope of receiving some of it back). The partner is encouraged to send something by a rule of the game specifying that the researchers will increase whatever is entrusted by some multiplying factor. The experiments rarely end in the subgame perfect Nash equilibrium of "no trust". In fact, a recent pair of studies of identical and fraternal twins in the USA and Sweden suggests that behavior in this game is heritable.[9]
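The "no trust" prediction mentioned above follows from one step of backward induction: a self-interested trustee returns nothing, so a self-interested sender entrusts nothing. A minimal sketch, with illustrative numbers (endowment of 10, transfers tripled by the researchers):

```python
def responder_return(received):
    """A purely self-interested trustee keeps everything."""
    return 0

def sender_payoff(sent, endowment=10, factor=3):
    returned = responder_return(sent * factor)
    return endowment - sent + returned

best = max(range(11), key=sender_payoff)
print(best, sender_payoff(best))  # 0 10: the subgame perfect "no trust" outcome
```

That laboratory subjects routinely send, and return, substantial amounts is what makes the game informative.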

References
[1] Levitt, Steven and Stephen Dubner (2009). Superfreakonomics New York: William Morrow.
[2] Bardsley, Nicholas. (2005) "Altruism or artifact? A Note on Dictator Game Giving" (http:/ / dx. doi. org/ 10. 1007/ s10683-007-9172-2)
CeDEx Discussion Paper No. 2005-10.
[3] For example, Bolton, Katok, Zwick 1998, "Dictator game giving: Rules of fairness versus acts of kindness" International Journal of Game
Theory 27:2 ( Article Abstract (http:/ / www. springerlink. com/ app/ home/ contribution. asp?wasp=63a7d9fd675e4a509ebc903b1c304293&
referrer=parent& backto=issue,9,11;journal,24,29;linkingpublicationresults,1:101791,1)). This paper (http:/ / lema. smeal. psu. edu/ katok/
45K5E4WC73MJF1WQ. pdf) includes a review of dictator games going back to 1994 (Forsythe R, Horowitz JL, Savin NE, Sefton M, 1994
Fairness in simple bargaining experiments. in Games and Economic Behavior). For an overview see Camerer, Colin (2003) Behavioral Game
Theory Princeton University Press, Princeton.
[4] Henrich, Joseph, Robert Boyd, Samuel Bowles, Colin Camerer, Ernst Fehr, and Herbert Gintis (2004) Foundations of Human Sociality:
Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies. Oxford University Press.
[5] Andreoni, J. and Miller, J. "Giving According to GARP: An Experimental Test of the Consistency of Preferences for Altruism."
Econometrica 70:2, 737-753.
[6] Fowler JH, Kam CD "Beyond the Self: Altruism, Social Identity, and Political Participation," Journal of Politics 69 (3): 811-825 (August
2007)
[7] Fowler JH. "Altruism and Turnout," Journal of Politics 68 (3): 674-683 (August 2006)
[8] Hoffman Elizabeth, McCabe Kevin, Shachat Keith and Smith Vernon (1994) "Preferences, Property Rights, and Anonymity in Bargaining
Games" Games and Economic Behavior 7(3): 346-380 and Bolton, Gary E., Elena Katok, and Rami Zwick (1998) "Dictator game giving:
Rules of fairness versus acts of kindness" (http:/ / lema. smeal. psu. edu/ katok/ 45K5E4WC73MJF1WQ. pdf) International Journal of Game
Theory 27:269-299.

[9] Cesarini, David; Christopher T. Dawes, James H. Fowler, Magnus Johannesson, Paul Lichtenstein, Björn Wallace (11 March 2008).
"Heritability of cooperative behavior in the trust game" (http:/ / jhfowler. ucsd. edu/ heritability_of_cooperative_behavior. pdf) (PDF).
Proceedings of the National Academy of Sciences 105 (10): 3721–3726. doi:10.1073/pnas.0710069105. PMC 2268795. PMID 18316737. .

Further reading
• Haley, K.; D. Fessler (2005). "Nobody's watching? Subtle cues affect generosity in an anonymous economic
game". Evolution and Human Behaviour 26 (3): 245–256. doi:10.1016/j.evolhumbehav.2005.01.002. Concludes
that people tend to be more generous if there is a picture of a pair of eyes watching them.
• For a recent review of the dictator game in experiments see Angela A. Stanton: Evolving Economics: Synthesis
(http://ideas.repec.org/p/pra/mprapa/767.html)

Public goods game


The public goods game is a standard experiment in experimental economics. In the basic game, subjects secretly choose how
many of their private tokens to put into a public pot. The tokens in the pot are multiplied by a factor (greater than 1) and this
"public good" payoff is evenly divided among players. Each subject also keeps the tokens they do not contribute.

Results
The group's total payoff is maximized when everyone contributes all of their tokens to the public pool. However, the
Nash equilibrium in this game is simply zero contributions by all: if the experiment were a purely analytical exercise
in game theory, it would resolve to zero contributions, because any rational agent does best contributing zero,
regardless of what anyone else does.[1]
In fact, the Nash equilibrium is rarely seen in experiments; people do tend to add something into the pot. The actual
levels of contribution vary widely (anywhere from 0% to 100% of the initial endowment can be chipped in).[2]
Those who contribute nothing are called "defectors" or "free riders", as opposed to the contributors, who are called
"cooperators".[1]

Variants

Iterated public goods games


"Repeat-play" public goods games involve the same group of subjects playing the basic game over a series of
rounds. The typical result is a declining proportion of public contribution, from the simple game (the "One-shot"
public goods game). When trusting contributors see that not everyone is giving up as much as they do they tend to
reduce the amount they share in the next round. If this is again repeated the same thing happens but from a lower
base, so that the amount contributed to the pot is reduced again. However, the amount contributed to the pool rarely
drops to zero when rounds of the game are iterated, because there tend to remain a hard core of ‘givers’.
One explanation for the dropping level of contribution is inequity aversion; once it is realized that others are
receiving a bigger share for a smaller contribution the sharing members react against the perceived injustice (even
though the identity of the “free riders” are unknown, and it’s only a game).
Those who contribute nothing in one round, rarely contribute something in later rounds, even after discovering that
others are.

Open public goods games


If the amount contributed is not hidden, it tends to be higher. In a typical public goods game there might be six
subjects contributing to the pot, so concealing the level of contribution is not difficult. In "pairwise iterations" with
only two players, the other player's contribution level is always known.

Public goods games with punishment


The option to punish non-contributors after a round of the public goods game is widely exercised, even at a cost. In
most experiments this leads to greater group cooperation, and fewer defections in subsequent rounds.[4]

Public goods games with reward


The option to reward co-operation (rather than punish defection) is less often exercised by players, but some studies
have shown that it can be more effective at enforcing co-operation than punishing.[5] The evidence comparing
reward with punishment is mixed; a 2007 study found that rewards could not sustain long-term cooperation.[6]

Income variation
A public goods game variant suggested as an improvement for researching the free rider problem is one in which
token income can vary among players; the standard game (with a fixed initial endowment) allows no work effort
variation and cannot capture the marginal substitutions among three factors: private goods, public goods, and
leisure.[7]

Multiplication factor
In order for contribution to be privately "irrational" the tokens in the pot must be multiplied by an amount smaller
than the number of players and greater than 1. Other than this, the level of multiplication has little bearing on
strategy, but higher factors produce higher proportions of contribution.
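To make the threshold precise (a short worked computation; the specific numbers are illustrative): with n players and multiplication factor r, a token kept is worth 1 to its owner, while a token contributed returns only

$\frac{r}{n}$

to the contributor, so contributing is privately irrational exactly when $1 < r < n$. For example, with $n = 4$ and $r = 1.6$, a contributed token creates 1.6 tokens for the group but returns only $1.6/4 = 0.4$ tokens to its contributor.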
With a large group (40) and a very low multiplication factor (1.03), almost no one contributes anything after a few
iterations of the game (a few still do). However, with the same group size and a multiplication factor of 1.3, the average
level of initial endowment contributed to the pot is around 50%.[8]

Implications
The name of the game comes from economists' definition of a "public good". One type of public good is a costly,
"non-excludable" project that everyone can benefit from, regardless of how much they contribute to create it
(because no one can be excluded from using it, like street lighting). Part of the economic theory of public goods is
that they would be under-provided (at a rate lower than the 'social optimum') because individuals have no private
motive to contribute (the free rider problem). The public goods game is designed to test this belief and connected
theories of social behaviour.

Game theory
The empirical fact that subjects in most societies contribute anything in the simple public goods game is a challenge
for game theory to explain via a motive of total self-interest. The theory does better with the 'punishment' variant
or the 'iterated' variant, because there some of the motivation to contribute becomes purely "rational": players may
assume that others act irrationally and will punish them for non-contribution.

Applications to sociology
The sociological interpretation of these results emphasizes group cohesion and cultural norms to explain the
"prosocial" outcomes of public goods games.

References
[1] Hauert, C. (January 2005). "Public Goods Games" (http://www.univie.ac.at/virtuallabs/PublicGoods/index.html#pgg). University of Vienna. "Groups of rational players will forego the public good and are thus unable to increase their initial endowment. This leads to a deadlock in a state of mutual defection and economic stalemate."
[2] Janssen, M.; Ahn, T. K. (2003-09-27). "Adaptation vs. Anticipation in Public-Good Games" (http://citation.allacademic.com/meta/p_mla_apa_research_citation/0/6/4/8/2/pages64827/p64827-1.php). American Political Science Association. Retrieved 2011-10-03. (This paper, from researchers at Indiana University and Florida State University, summarizes the experimental findings of earlier research before comparing theoretical models against these results.)
[4] Andreoni, James; Harbaugh, William; Vesterlund, Lise (2003). "The Carrot or the Stick: Rewards, Punishments, and Cooperation". The American Economic Review 93 (3): 901. "Punishments improved cooperation by eliminating extremely selfish offers, pushing proposers in the Stick treatment to modest degrees of cooperation."
[5] Rand, D. G. et al. (2009). "Positive Interactions Promote Public Cooperation". Science 325.
[6] Sefton, M.; Shupp, R.; Walker, J. M. (2007-04-16). "The Effect of Rewards and Sanctions in Provision of Public Goods". Economic Inquiry 45 (4): 671–690.
[7] Graves, P. E. (September 2010). "A Note on the Design of Experiments Involving Public Goods" (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1687570).
[8] Isaac, Walker, and Williams (1994). "Group Size and the Voluntary Provision of Public Goods: Experimental Evidence Utilizing Large Groups". Journal of Public Economics 54(1).

External links
• The evolution of strategies (http://www.univie.ac.at/virtuallabs/PublicGoods/mixed.html) in public goods
games: Three interactive simulations of the spread of defection among automated players choosing between
strategies. (This is a model of experimental economics, rather than an actual experiment.)
• Voluntary Participation and Spite in Public Good Provision Experiments: An International Comparison (http://
www.krannert.purdue.edu/faculty/cason/papers/intspite.pdf). This is an Economic Science Association paper
from 2002 detailing the methodology and results used in an experiment comparing the performance of Japanese
and American subjects in public goods games. They reject the hypothesis of international equality in overall
efficiency:
The mean contribution rate among the 60 Japanese subjects was 80%
The mean contribution rate among the 39 American subjects was 69%

Blotto games
Blotto games (or Colonel Blotto games) constitute a class of two-person zero-sum games in which the players are
tasked to simultaneously distribute limited resources over several objects (or battlefields). In the classic version of
the game, the player devoting the most resources to a battlefield wins that battlefield, and the gain (or payoff) is then
equal to the total number of battlefields won.
Though the Colonel Blotto game was first proposed by Borel[1] in 1921, most variations of the classic game
remained unsolved for 85 years. In 2006, Roberson described the equilibrium payoffs to the classic game for any
number of battlefields and any level of relative resources, as well as characterizing the set of equilibria of most
versions of the classic game.[2]
The game is named after the fictional Colonel Blotto from Gross and Wagner's 1950[3] paper. The Colonel was
tasked with finding the optimum distribution of his soldiers over N battlefields knowing that:
1. on each battlefield, the party that has allocated the most soldiers will win;
2. neither party knows how many soldiers the opposing party will allocate to each battlefield; and
3. both parties seek to maximize the number of battlefields they expect to win.

Example
As an example Blotto game, consider the game in which two players each write down three positive integers in
non-decreasing order and such that they add up to a pre-specified number S. Subsequently, the two players show
each other their writings, and compare corresponding numbers. The player who has two numbers higher than the
corresponding ones of the opponent wins the game.
For S = 6 only three choices of numbers are possible: (2, 2, 2), (1, 2, 3) and (1, 1, 4). It is easy to see that:
• Any triplet against itself is a draw
• (1, 1, 4) against (1, 2, 3) is a draw
• (1, 2, 3) against (2, 2, 2) is a draw
• (2, 2, 2) beats (1, 1, 4)
It follows that the optimum strategy is (2, 2, 2), as it does no worse than breaking even against any other strategy
while beating one other strategy. There are, however, several Nash equilibria: if both players choose the strategy
(2, 2, 2) or (1, 2, 3), then neither can beat the other by changing strategies, so every such strategy pair is a Nash
equilibrium.
For larger S the game becomes progressively more difficult to analyse. For S = 12, it can be shown that (2, 4, 6)
represents the optimal strategy, while for S > 12, deterministic strategies fail to be optimal. For S = 13, choosing (3,
5, 5), (3, 3, 7) and (1, 5, 7) with probability 1/3 each can be shown to be the optimal probabilistic strategy.
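The case analysis above is small enough to verify exhaustively. Below is a minimal sketch (plain Python; the +1/0/-1 scoring for win/draw/loss is an illustrative convention) that enumerates the admissible triples for a given S and tabulates every pairwise result:

from itertools import combinations_with_replacement

def triples(S):
    # non-decreasing triples of positive integers summing to S
    return [t for t in combinations_with_replacement(range(1, S + 1), 3)
            if sum(t) == S]

def result(p, q):
    # +1 if p beats q (at least two components strictly higher), -1 if q beats p
    p_wins = sum(a > b for a, b in zip(p, q))
    q_wins = sum(b > a for a, b in zip(p, q))
    return (p_wins >= 2) - (q_wins >= 2)

for p in triples(6):
    print(p, [result(p, q) for q in triples(6)])
# For S = 6 the admissible triples are (1,1,4), (1,2,3), (2,2,2); the printed
# rows reproduce the case analysis above: (2,2,2) beats (1,1,4) and draws
# against everything else.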

Application
The 2000 US presidential election, one of the closest races in recent history, has been modeled as a Colonel Blotto
game.[4] It is argued that Gore could have utilized a strategy that would have won the election, but that such a
strategy was not identifiable ex ante.

External links
• Ayala Arad and Ariel Rubinstein's article Colonel Blotto's Top secret Files: Multi-Dimensional Iterative
Reasoning in Action [5]
• Jonathan Partington's Colonel Blotto page [6]

References
[1] The Theory of Play and Integral Equations with Skew Symmetric Kernels (http://www.jstor.org/stable/1906946)
[2] The Colonel Blotto game (http://www.springerlink.com/index/f22232k0j13816r8.pdf)
[3] A Continuous Colonel Blotto Game (http://www.rand.org/pubs/research_memoranda/2006/RM408.pdf)
[4] Lotto, Blotto, or Frontrunner: An Analysis of Spending Patterns by the National Party Committees in the 2000 Presidential Election (http://www.socsci.duke.edu/ssri/federalism/papers/tofiasmunger.pdf)
[5] http://arielrubinstein.tau.ac.il/papers/generals.pdf
[6] http://www.amsta.leeds.ac.uk/~pmt6jrp/personal/blotto.html

War of attrition
In game theory, the war of attrition is a model of aggression in which two contestants compete for a resource of
value V by persisting while constantly accumulating costs over the time t that the contest lasts. The model was
originally formulated by John Maynard Smith;[1] a mixed evolutionarily stable strategy (ESS) was determined by
Bishop & Cannings.[2] Strategically, the game is an auction in which the prize goes to the player with the highest
bid, and each player pays the loser's bid (making it an all-pay sealed-bid second-price auction).

Examining the game


The war of attrition cannot be properly solved using the payoff matrix. The players' available resources are the only
limit to the maximum value of bids; bids can be any number if available resources are ignored, meaning that for any
value of α, there is a value β that is greater. Attempting to put all possible bids onto the matrix, however, will result
in an ∞×∞ matrix. One can, however, use a pseudo-matrix form of war of attrition to understand the basic workings
of the game, and analyze some of the problems in representing the game in this manner.
The game works as follows: Each player makes a bid; the one who bids the highest wins a resource of value V. Each
player pays the lowest bid, a.
The premise that the players may bid any number is important to analysis of the game. The bid may even exceed the
value of the resource that is contested over. This at first appears to be irrational, being seemingly foolish to pay more
for a resource than its value; however, remember that each bidder only pays the low bid. Therefore, it would seem to
be in each player's best interest to bid the maximum possible amount rather than an amount equal to or less than the
value of the resource.
There is a catch, however; if both players bid higher than V, the high bidder does not so much win as lose less. This
situation is commonly referred to as a Pyrrhic victory. In contrast, if each player bids less than V, the player bidding
a will lose, and the other player will benefit by an amount of V - a. If both players bid the same amount a, for a less than
V/2, they split the value of V, each gaining V/2 - a. For a tie with a > V/2, both players lose the difference between a
and V/2. Luce and Raiffa (1957) referred to the latter situation as a "ruinous situation": the point at which both
players suffer, and there is no winner.
The conclusion one can draw from this pseudo-matrix is that there is no value to bid which is beneficial in all cases,
so there is no dominant strategy. However, this fact and the above argument do not preclude the existence of Nash
Equilibria. Any pair of strategies with the following characteristics is a Nash Equilibrium:
• One player bids zero
• The other player bids any value equal to V or higher, or mixes among any values V or higher.
With these strategies, one player wins and pays zero, and the other player loses and pays zero. It is easy to verify that
neither player can strictly gain by unilaterally deviating.

Dynamic formulation and Evolutionary stable strategy


Another popular formulation of the war of attrition is as follows: two players are involved in a dispute. The value of
the object to player i is $v_i > 0$. Time is modeled as a continuous variable which starts at zero and runs
indefinitely. Each player chooses when to concede the object to the other player. In the case of a tie, each player
receives $v_i / 2$ utility. Time is valuable: each player uses one unit of utility per period of time. This formulation is
slightly more complex, since it allows each player to assign a different value to the object, and its equilibria are not as
obvious as in the other formulation. In the symmetric case, where both players value the resource at V, the
evolutionarily stable strategy is a mixed ESS in which the probability of persisting for a length of time t has density:

$p(t) = \frac{1}{V} e^{-t/V}$

Here p(t) describes a random persistence time in a contest over a resource of value V, not a single best bid. This
strategy does not guarantee a win; rather, it is the optimal balance of risk and reward. The outcome of any particular
contest cannot be predicted, as the opponent's random persistence time is unknowable in advance.

That no pure persistence time is an ESS can be demonstrated simply by considering a putative ESS bid of x, which
will be beaten by a bid of $x + \epsilon$ for any small $\epsilon > 0$.
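The indifference property behind this mixed strategy can be checked numerically. The sketch below (plain Python; it assumes the symmetric case with costs accruing at one unit per unit of time, and the value V = 10 is illustrative) estimates the expected payoff of several pure persistence times against an opponent playing the exponential mixture:

import random

V = 10.0       # value of the contested resource (illustrative)
N = 200_000    # Monte Carlo samples

def expected_payoff(x):
    """Mean payoff of persisting until time x against an opponent whose
    concession time is exponential with mean V (the candidate ESS mixture)."""
    total = 0.0
    for _ in range(N):
        t = random.expovariate(1.0 / V)  # opponent concedes at time t
        if t < x:
            total += V - t               # we win and pay the loser's bid t
        else:
            total += -x                  # we concede first and pay x
    return total / N

for x in (0.0, 5.0, 10.0, 25.0):
    print(x, round(expected_payoff(x), 2))
# Every persistence time earns (approximately) the same expected payoff, zero:
# against the exponential mixture no pure waiting time does better, which is
# exactly the indifference that an equilibrium mixture must induce.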

The ESS in popular culture


The evolutionarily stable strategy when playing this game is a probability density of random persistence times which
cannot be predicted by the opponent in any particular contest. This result has led to the prediction that threat displays
ought not to evolve, and to the conclusion in The Illuminatus! Trilogy that optimal military strategy is to behave in a
completely unpredictable, and therefore insane, manner. Neither of these conclusions appears to be a truly quantifiably
reasonable application of the model to realistic conditions.

Conclusions
Examining the unusual results of this game serves to mathematically support a piece of old wisdom:
"expect the unexpected". By making the assumption that an opponent will act irrationally, one can paradoxically
better predict their actions, as their options in this game are limited: they will either act rationally and take the optimal
decision, or act irrationally and take the non-optimal decision. If one considers the irrational play as a bluff and the
rational play as backing down from a bluff, the game transforms into another game theory game, Hawk and Dove.

References
[1] Maynard Smith, J. (1974) Theory of games and the evolution of animal contests. Journal of Theoretical Biology 47: 209-221.
[2] Bishop, D.T. & Cannings, C. (1978) A generalized war of attrition. Journal of Theoretical Biology 70: 85-124.

Sources
• Bishop, D.T., Cannings, C. & Maynard Smith, J. (1978) The war of attrition with random rewards. Journal of
Theoretical Biology 74:377-389.
• Maynard Smith, J. & Parker, G. A. (1976). The logic of asymmetric contests. Animal Behaviour. 24:159-175.
• Luce, R. D. & Raiffa, H. (1957) "Games and Decisions: Introduction and Critical Survey" (originally published as
"A Study of the Behavioral Models Project, Bureau of Applied Social Research") John Wiley & Sons Inc., New
York
• Rapoport, Anatol (1966) "Two Person Game Theory" University of Michigan Press, Ann Arbor

External links
• Exposition of the derivation of the ESS (http://www.holycross.edu/departments/biology/kprestwi/behavior/
ESS/warAtt_mixESS.html) - From Ken Prestwich's Game Theory website at College of the Holy Cross

El Farol Bar problem


The El Farol bar problem is a problem in game theory. Based on
a bar in Santa Fe, New Mexico, it was created in 1994 by W. Brian
Arthur.
The problem is as follows: There is a particular, finite population
of people. Every Thursday night, all of these people want to go to
the El Farol Bar. However, the El Farol is quite small, and it's no
fun to go there if it's too crowded. So much so, in fact, that the
preferences of the population can be described as follows:
• If less than 60% of the population go to the bar, they'll all have a better time than if they stayed at home.
• If more than 60% of the population go to the bar, they'll all have a worse time than if they stayed at home.
[Image: El Farol bar in Santa Fe, New Mexico]
Unfortunately, it is necessary for everyone to decide at the same time whether they will go to the bar or not. They
cannot wait and see how many others go on a particular Thursday before deciding to go themselves on that
Thursday.
One aspect of the problem is that, no matter what method each person uses to decide if they will go to the bar or not,
if everyone uses the same pure strategy it is guaranteed to fail. If everyone uses the same deterministic method, then
if that method suggests that the bar will not be crowded, everyone will go, and thus it will be crowded; likewise, if
that method suggests that the bar will be crowded, nobody will go, and thus it will not be crowded. Often the
solution to such problems in game theory is to permit each player to use a mixed strategy, where a choice is made
with a particular probability. In the case of the single-stage El Farol Bar problem, there exists a unique symmetric
Nash equilibrium mixed strategy where all players choose to go to the bar with a certain probability that is a function
of the number of players, the threshold for crowdedness, and the relative utility of going to a crowded or an
uncrowded bar compared to staying home. There are also multiple Nash equilibria in which one or more players use a
pure strategy, but these equilibria are not symmetric.[1] Several variants are considered in Gintis (2009).[2]
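That symmetric mixed equilibrium can be computed from the indifference condition. In the sketch below (plain Python; the population size of 100, the 60% threshold, and the payoffs G > S > B for an uncrowded night, staying home, and a crowded night are illustrative assumptions), bisection finds the probability p at which going and staying yield the same expected payoff:

from math import comb

N, T = 100, 60             # population size and crowding threshold (illustrative)
G, S, B = 1.0, 0.0, -1.0   # payoffs: uncrowded bar > staying home > crowded bar

def go_payoff(p):
    """Expected payoff of going when each of the other N-1 players goes with probability p."""
    # K = number of others who go ~ Binomial(N-1, p); the bar is fun iff K + 1 <= T
    p_uncrowded = sum(comb(N - 1, k) * p**k * (1 - p)**(N - 1 - k)
                      for k in range(T))   # P(K <= T - 1)
    return G * p_uncrowded + B * (1 - p_uncrowded)

# go_payoff is decreasing in p, so bisect for the indifference point go_payoff(p) = S
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if go_payoff(mid) > S else (lo, mid)
print(round(lo, 4))  # roughly 0.6: in equilibrium, attendance hovers near the threshold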

In some variants of the problem, the people are allowed to communicate with each other before deciding to go to the
bar. However, they are not required to tell the truth.

Minority Game
One variant of the El Farol Bar problem is the minority game proposed by Yi-Cheng Zhang and Damien Challet
from the University of Fribourg. In the minority game, an odd number of players each must choose one of two
choices independently at each turn.
The players who end up on the minority side win. While the El Farol Bar problem was originally formulated to
analyze a decision-making method other than deductive rationality, the minority game examines the characteristic of
the game that no single deterministic strategy may be adopted by all participants in equilibrium. Allowing for mixed
strategies in the single-stage minority game produces a unique symmetric Nash equilibrium, which is for each player
to choose each action with 50% probability, as well as multiple equilibria that are not symmetric.
The minority game was featured in the manga Liar Game. In that multi-stage minority game, the majority was
eliminated from the game until only one player was left. Players were shown engaging in cooperative strategies.

External links
• An Introductory Guide to the Minority Game [3]
• Minority game on arxiv.org [4]
• El Farol bar in Santa Fe, New Mexico [5]
• Software for Minority Games modelling [6]

References
• W. Brian Arthur, “Inductive Reasoning and Bounded Rationality” [7], American Economic Review (Papers and
Proceedings), 84,406–411, 1994.
[1] Whitehead, Duncan. "The El Farol Bar Problem Revisited: Reinforcement Learning in a Potential Game" (http://www.econ.ed.ac.uk/papers/The El Farol Bar Problem Revisited.pdf), University of Edinburgh, September 17, 2008
[2] Gintis, Herbert. Game Theory Evolving (Princeton: Princeton University Press, 2009), Section 6.24: El Farol, p. 134
[3] http://markov.uc3m.es/last-papers/the-minority-game-an-introductory-guide.html
[4] http://xstructure.inr.ac.ru/x-bin/theme3.py?level=1&index1=117820
[5] http://elfarolsf.com
[6] http://agf.statsolutions.eu
[7] http://www.santafe.edu/arthur/Papers/El_Farol.html

Fair division
Fair division, also known as the cake-cutting problem, is the problem of dividing a resource in such a way that all
recipients believe that they have received a fair amount. The problem is easier when recipients have different
measures of value of the parts of the resource: in the "cake cutting" version, one recipient may like marzipan, another
prefers cherries, and so on. Then, and only then, can each of the n recipients get even more than one n-th of
the value of the "cake" by their own measure. On the other hand, the presence of different measures opens a vast potential
for many challenging questions and directions of further research.
There are a number of variants of the problem. The definition of 'fair' may simply mean that they get at least their
fair proportion, or harder requirements like envy-freeness may also need to be satisfied. The theoretical algorithms
mainly deal with goods that can be divided without losing value. The division of indivisible goods, as in for instance
a divorce, is a major practical problem. Chore division is a variant where the goods are undesirable.
Fair division is often used to refer to just the simplest variant. That version is referred to here as proportional division
or simple fair division.
Most of what is normally called a fair division is not considered so by the theory because of the use of arbitration.
This kind of situation happens quite often with mathematical theories named after real life problems. The decisions
in the Talmud on entitlement when an estate is bankrupt reflect some quite complex ideas about fairness,[1] and most
people would consider them fair. However they are the result of legal debates by rabbis rather than divisions
according to the valuations of the claimants.

Assumptions
Fair division is a mathematical theory based on an idealization of a
real life problem. The real life problem is the one of dividing
goods or resources fairly between people, the 'players', who have
an entitlement to them. The central tenet of fair division is that
such a division should be performed by the players themselves,
maybe using a mediator but certainly not an arbiter as only the
players really know how they value the goods.

The theory of fair division provides explicit criteria for various different types of fairness. Its aim is to provide
procedures (algorithms) to achieve a fair division, or prove their impossibility, and study the properties of such
divisions both in theory and in real life.
[Image: Berlin divided by the Potsdam Conference]
The assumptions about the valuation of the goods or resources are:
• Each player has their own opinion of the value of each part of the goods or resources
• The value to a player of any allocation is the sum of his valuations of each part. Often just requiring the valuations
be weakly additive is enough.
• In the basic theory the goods can be divided into parts with arbitrarily small value.
Indivisible parts make the theory much more complex. An example of this would be where a car and a motorcycle
have to be shared. This is also an example of where the values may not add up nicely, as either can be used as
transport. The use of money can make such problems much easier.
The criteria of a fair division are stated in terms of a player's valuations, their level of entitlement, and the results of a
fair division procedure. The valuations of the other players are not involved in the criteria. Differing entitlements can
normally be represented by having a different number of proxy players for each player but sometimes the criteria
specify something different.

In the real world, of course, people sometimes have a very accurate idea of how the other players value the goods, and
they may care very much about it. The case where they have complete knowledge of each other's valuations can be
modeled by game theory. Partial knowledge is very hard to model. A major part of the practical side of fair division
is the devising and study of procedures that work well despite such partial knowledge or small mistakes.
A fair division procedure lists actions to be performed by the players in terms of the visible data and their valuations.
A valid procedure is one that guarantees a fair division for every player who acts rationally according to their
valuation. Where an action depends on a player's valuation the procedure is describing the strategy a rational player
will follow. A player may act as if a piece had a different value but must be consistent. For instance if a procedure
says the first player cuts the cake in two equal parts then the second player chooses a piece, then the first player
cannot claim that the second player got more.
What the players do is:
• Agree on their criteria for a fair division
• Select a valid procedure and follow its rules
It is assumed the aim of each player is to maximize the minimum amount they might get, or in other words, to
achieve the maximin.
Procedures can be divided into finite and continuous procedures. A finite procedure would for instance only involve
one person at a time cutting or marking a cake. Continuous procedures involve things like one player moving a knife
and the other saying stop. Another type of continuous procedure involves a person assigning a value to every part of
the cake.

Criteria for a fair division


There are a number of widely used criteria for a fair division. Some of these conflict with each other but often they
can be combined. The criteria described here are only for when each player is entitled to the same amount.
• A proportional or simple fair division guarantees each player gets his fair share. For instance if three people
divide up a cake each gets at least a third by their own valuation.
• An envy-free division guarantees no-one will want somebody else's share more than their own.
• An exact division is one where every player thinks everyone received exactly their fair share, no more and no
less.
• An efficient or Pareto optimal division ensures no other allocation would make someone better off without
making someone else worse off. The term efficiency comes from the economics idea of the efficient market. A
division where one player gets everything is optimal by this definition so on its own this does not guarantee even
a fair share.
• An equitable division is one where the proportion of the cake a player receives by their own valuation is the same
for every player. This is a difficult aim as players need not be truthful if asked their valuation.

Two players
For two people there is a simple solution which is commonly employed. This is the so-called divide and choose
method. One person divides the resource into what they believe are equal halves, and the other person chooses the
"half" they prefer. Thus, the person making the division has an incentive to divide as fairly as possible: for if they do
not, they will likely receive an undesirable portion. This solution gives a proportional and envy-free division.
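A minimal sketch of divide and choose, assuming the cake is the interval [0, 1] and each player's valuation is a piecewise-constant density (the particular densities below are illustrative):

# Each valuation is a list of weights over equal-width segments; weights sum to 1.
def value(density, a, b):
    """A player's value of the interval [a, b] under a piecewise-constant density."""
    n = len(density)
    total = 0.0
    for i, w in enumerate(density):
        lo, hi = i / n, (i + 1) / n
        overlap = max(0.0, min(b, hi) - max(a, lo))
        total += w * overlap * n   # density height on segment i is w * n
    return total

def divide_and_choose(cutter, chooser):
    # The cutter bisects the cake by her own measure: find x with value(cutter, 0, x) = 1/2
    lo, hi = 0.0, 1.0
    for _ in range(50):
        x = (lo + hi) / 2
        lo, hi = (x, hi) if value(cutter, 0, x) < 0.5 else (lo, x)
    # The chooser takes whichever piece he values more; the cutter gets the other
    left = value(chooser, 0, x)
    return (x, 'left' if left >= 1 - left else 'right')

cutter = [0.5, 0.5, 0.0, 0.0]    # the cutter only values the left half
chooser = [0.0, 0.0, 0.5, 0.5]   # the chooser only values the right half
print(divide_and_choose(cutter, chooser))  # cut near 0.25; the chooser takes 'right'
# The cutter gets exactly 1/2 by her measure and the chooser at least 1/2 by his,
# so the division is proportional and envy-free for both.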
The article on divide and choose describes why the procedure is not equitable. More complex procedures like the
adjusted winner procedure are designed to cope with indivisible goods and to be more equitable in a practical
context.
Austin's moving-knife procedure[2] gives an exact division for two players. The first player places two knives over
the cake such that one knife is at the left side of the cake, and one is further right; half of the cake lies between the
knives. He then moves the knives right, always ensuring there is half the cake – by his valuation – between the
knives. If he reaches the right side of the cake, the leftmost knife must be where the rightmost knife started off. The
second player stops the knives when he thinks there is half the cake between the knives. There will always be a point
at which this happens, because of the intermediate value theorem.
The surplus procedure (SP) achieves a form of equitability called proportional equitability. This procedure is strategy
proof and can be generalized to more than two people.[3]

Many players
Fair division with three or more players is considerably more complex than the two player case.
Proportional division is the easiest and the article describes some procedures which can be applied with any number
of players. Finding the minimum number of cuts needed is an interesting mathematical problem.
Envy-free division was first solved for the 3 player case in 1960 independently by John Selfridge of Northern Illinois
University and John Horton Conway at Cambridge University. The best algorithm uses at most 5 cuts.
The Brams-Taylor procedure was the first cake-cutting procedure for four or more players that produced an
envy-free division of cake for any number of persons and was published by Steven Brams and Alan Taylor in
1995.[4] The number of cuts that might be required by this procedure is unbounded. A bounded moving-knife
procedure for 4 players was found in 1997.
There are no discrete algorithms for an exact division even for two players; a moving-knife procedure is the best that
can be done. There are no exact division algorithms for 3 or more players, but there are 'near exact' algorithms which
are also envy-free and can achieve any desired degree of accuracy.
A generalization of the surplus procedure called the equitable procedure (EP) achieves a form of equitability.
Equitability and envy-freeness can be incompatible for 3 or more players.[3]

Variants
Some cake-cutting procedures are discrete, whereby players make cuts with a knife (usually in a sequence of steps).
Moving-knife procedures, on the other hand, allow continuous movement and can let players call "stop" at any point.
A variant of the fair division problem is chore division: this is the "dual" to the cake-cutting problem in which an
undesirable object is to be distributed amongst the players. The canonical example is a set of chores that the players
between them must do. Note that "I cut, you choose" works for chore division. A basic theorem for many-person
problems is the Rental Harmony Theorem by Francis Su.[5] An interesting application of the Rental Harmony
Theorem can be found in international trade theory.[6]
Sperner's Lemma can be used to get as close an approximation as desired to an envy-free solution for many players.
The algorithm gives a fast and practical way of solving some fair division problems.[7][8][9]

The division of property, as happens for example in divorce or inheritance, normally contains indivisible items
which must be fairly distributed between players, possibly with cash adjustments (such pieces are referred to as
atoms).
A common requirement for the division of land is that the pieces be connected, i.e. only whole pieces and not
fragments are allowed. For example, the division of Berlin after World War II resulted in four connected parts.[10] A
consensus halving is where a number of people agree that a resource has been evenly split in two; this is described in
exact division.

History
According to Sol Garfunkel, the cake-cutting problem had been one of the most important open problems in 20th
century mathematics,[11] when the most important variant of the problem was finally solved with the Brams-Taylor
procedure by Steven Brams and Alan Taylor in 1995.
Divide and choose has probably been used since prehistory. The related activities of bargaining and barter are also
ancient. Negotiations involving more than two people are also quite common; the Potsdam Conference is a notable
recent example.
The theory of fair division dates back only to the end of the second world war. It was devised by a group of Polish
mathematicians, Hugo Steinhaus, Bronisław Knaster and Stefan Banach, who used to meet in the Scottish Café in
Lvov (then in Poland). A proportional (fair division) division for any number of players called 'last-diminisher' was
devised in 1944. This was attributed to Banach and Knaster by Steinhaus when he made the problem public for the
first time at a meeting of the Econometric Society in Washington D.C. on 17 September 1947. At that meeting he
also proposed the problem of finding the smallest number of cuts necessary for such divisions.
Envy-free division was first solved for the 3 player case in 1960 independently by John Selfridge of Northern Illinois
University and John Horton Conway at Cambridge University, the algorithm was first published in the 'Mathematical
Games' column by Martin Gardner in Scientific American.
Envy-free division for 4 or more players was a difficult open problem of the twentieth century. The first cake-cutting
procedure that produced an envy-free division of cake for any number of persons was first published by Steven
Brams and Alan Taylor in 1995.
A major advance on equitable division was made in 2006 by Steven J. Brams, Michael A. Jones, and Christian
Klamler.[3]

In popular culture
• In Numb3rs season 3 episode "One Hour", Charlie talks about the cake-cutting problem as applied to the amount
of money a kidnapper was demanding.
• Hugo Steinhaus wrote about a number of variants of fair division in his book Mathematical Snapshots. In his
book he says a special three-person version of fair division was devised by G. Krochmainy in Berdechów in 1944
and another by Mrs L Kott.[12]
• Martin Gardner and Ian Stewart have both published books with sections about the problem.[13] [14] Martin
Gardner introduced the chore division form of the problem. Ian Stewart has popularized the fair division problem
with his articles in Scientific American and New Scientist.
• A Dinosaur Comics strip is based on the cake-cutting problem.[15]

References
[1] "Game Theoretic Analysis of a Bankruptcy Problem from the Talmud" (http://www.elsevier.com/framework_aboutus/Nobel/Nobel2005/nobel2005pdfs/aum16.pdf), Robert J. Aumann and Michael Maschler. Journal of Economic Theory 36, 195-213 (1985)
[2] A. K. Austin. "Sharing a Cake". Mathematical Gazette 66, 1982
[3] Brams, Steven J.; Michael A. Jones and Christian Klamler (December 2006). "Better Ways to Cut a Cake" (http://www.ams.org/notices/200611/fea-brams.pdf) (PDF). Notices of the American Mathematical Society 53 (11): pp. 1314–1321. Retrieved 2008-01-16.
[4] Steven J. Brams; Alan D. Taylor (January 1995). "An Envy-Free Cake Division Protocol". The American Mathematical Monthly (Mathematical Association of America) 102 (1): 9–18. doi:10.2307/2974850. JSTOR 2974850.
[5] Francis Edward Su (1999). "Rental Harmony: Sperner's Lemma in Fair Division" (http://www.math.hmc.edu/~su/papers.dir/rent.pdf). Amer. Math. Monthly 106 (10): 930–942. doi:10.2307/2589747.
[6] Shiozawa, Y. (2007). "A New Construction of a Ricardian Trade Theory". Evolutionary and Institutional Economics Review 3 (2): 141–187.
[7] Francis Edward Su. Cited above. (Based on work by Forest Simmons, 1980.)
[8] "The Fair Division Calculator" (http://www.math.hmc.edu/~su/fairdivision/calc/).
[9] Ivars Peterson (March 13, 2000). "A Fair Deal for Housemates" (http://www.maa.org/mathland/mathtrek_3_13_00.html). MathTrek.
[10] Steven J. Brams; Alan D. Taylor (1996). Fair Division: From Cake-Cutting to Dispute Resolution. Cambridge University Press. p. 38. ISBN 978-0521556446.
[11] Sol Garfunkel. More Equal than Others: Weighted Voting. For All Practical Purposes. COMAP. 1988
[12] Mathematical Snapshots. H. Steinhaus. 1950, 1969. ISBN 0-19-503267-5
[13] aha! Insight. Martin Gardner, 1978. ISBN 978-0716710172
[14] How to Cut a Cake and Other Mathematical Conundrums. Ian Stewart. 2006. ISBN 978-0199205905
[15] http://www.qwantz.com/archive/001345.html

Further reading
• Steven J. Brams and Alan D. Taylor (1996). Fair Division - From cake-cutting to dispute resolution Cambridge
University Press. ISBN 0-521-55390-3
• T.P. Hill (2000). "Mathematical devices for getting a fair share", American Scientist, Vol. 88, 325-331.
• Jack Robertson and William Webb (1998). Cake-Cutting Algorithms: Be Fair If You Can, AK Peters Ltd. ISBN
1-56881-076-8.

External links
• Short essay about the cake-cutting problem (http://3quarksdaily.blogs.com/3quarksdaily/2005/04/
3qd_monday_musi.html) by S. Abbas Raza of 3 Quarks Daily.
• Fair Division (http://www.colorado.edu/education/DMP/fair_division.html) from the Discrete Mathematics
Project at the University of Colorado at Boulder.
• The Fair Division Calculator (http://www.math.hmc.edu/~su/fairdivision/calc/) (Java applet) at Harvey
Mudd College
• Fair Division: Method of Lone Divider (http://www.cut-the-knot.org/Curriculum/SocialScience/LoneDivider.
shtml)
• Fair Division: Method of Markers (http://www.cut-the-knot.org/Curriculum/SocialScience/Markers.shtml)
• Fair Division: Method of Sealed Bids (http://www.cut-the-knot.org/Curriculum/SocialScience/SealedBids.
shtml)
• Vincent P. Crawford (1987). "fair division," The New Palgrave: A Dictionary of Economics, v. 2, pp. 274–75.
• Hal Varian (1987). "fairness," The New Palgrave: A Dictionary of Economics, v. 2, pp. 275–76.
• Bryan Skyrms (1996). The Evolution of the Social Contract Cambridge University Press. ISBN 9780521555838

Cournot competition
Cournot competition is an economic model used to describe an industry structure in which companies compete on
the amount of output they will produce, which they decide on independently of each other and at the same time. It is
named after Antoine Augustin Cournot[1] (1801-1877) who was inspired by observing competition in a spring water
duopoly. It has the following features:
• There is more than one firm and all firms produce a homogeneous product, i.e. there is no product differentiation;
• Firms do not cooperate, i.e. there is no collusion;
• Firms have market power, i.e. each firm's output decision affects the good's price;
• The number of firms is fixed;
• Firms compete in quantities, and choose quantities simultaneously;
• The firms are economically rational and act strategically, usually seeking to maximize profit given their
competitors' decisions.
An essential assumption of this model is the "no conjectural variation" assumption: each firm aims to maximize
profits, based on the expectation that its own output decision will not have an effect on the decisions of its rivals.
Price is a commonly known decreasing function of total output. All firms know $N$, the total number of firms in the
market, and take the output of the others as given. Each firm has a cost function $c_i(q_i)$. Normally the cost
functions are treated as common knowledge. The cost functions may be the same or different among firms. The
market price is set at a level such that demand equals the total quantity produced by all firms. Each firm takes the
quantity set by its competitors as a given, evaluates its residual demand, and then behaves as a monopoly.

Graphically finding the Cournot duopoly equilibrium


This section presents an analysis of the model with 2 firms and constant marginal cost.
$p_1$ = firm 1's price, $p_2$ = firm 2's price
$q_1$ = firm 1's quantity, $q_2$ = firm 2's quantity
$c$ = marginal cost, identical for both firms
Equilibrium prices will be:
$p_1 = p_2 = P(q_1 + q_2)$
This implies that firm 1's profit is given by
$\Pi_1 = q_1 \cdot \left( P(q_1 + q_2) - c \right)$
• Calculate firm 1's residual demand: Suppose firm 1 believes firm 2 is producing quantity $q_2$. What is firm 1's
optimal quantity? Consider diagram 1. If firm 1 decides not to produce anything, then price is given by
$P(0 + q_2) = P(q_2)$. If firm 1 produces $q_1'$ then price is given by $P(q_1' + q_2)$. More generally, for each
quantity that firm 1 might decide to set, price is given by the curve $d_1(q_2)$. The curve $d_1(q_2)$ is called firm 1's
residual demand; it gives all possible combinations of firm 1's quantity and price for a given value of $q_2$.

• Determine firm 1's optimum output: To do this we must find where marginal revenue equals marginal cost.
Marginal cost (c) is assumed to be constant. Marginal revenue is a curve, $r_1(q_2)$, with twice the slope of
$d_1(q_2)$ and with the same vertical intercept. The point at which the two curves ($c$ and $r_1(q_2)$) intersect
corresponds to quantity $q_1''(q_2)$. Firm 1's optimum $q_1''(q_2)$ depends on what it believes firm 2 is doing. To find
an equilibrium, we derive firm 1's optimum for other possible values of $q_2$. Diagram 2 considers two possible
values of $q_2$. If $q_2 = 0$, the first firm's residual demand is effectively the market demand.
The optimal solution is for firm 1 to choose the monopoly quantity: $q_1''(0) = q_m$ ($q_m$ is the monopoly quantity). If
firm 2 were to choose the quantity corresponding to perfect competition, $q_2 = q_c$ such that $P(q_c) = c$, then
firm 1's optimum would be to produce nil: $q_1''(q_c) = 0$. This is the point at which marginal cost intercepts the
marginal revenue corresponding to $d_1(q_c)$.

• It can be shown that, given the linear demand and constant marginal cost, the function $q_1''(q_2)$ is also linear.
Because we have two points, we can draw the entire function $q_1''(q_2)$; see diagram 3. Note that the axes of the graphs
have changed. The function $q_1''(q_2)$ is firm 1's reaction function: it gives firm 1's optimal choice for each possible
choice by firm 2. In other words, it gives firm 1's choice given what it believes firm 2 is doing.

• The last stage in finding the Cournot equilibrium is to find firm 2's reaction function. In this case it is symmetric
to firm 1's, as the two firms have the same cost function. The equilibrium is the intersection point of the reaction curves;
see diagram 4.

• The prediction of the model is that the firms will choose Nash equilibrium output levels.

Calculating the equilibrium


In very general terms, let the price function for the (duopoly) industry be $P(q_1 + q_2)$ and let firm i have the cost
structure $C_i(q_i)$. To calculate the Nash equilibrium, the best response functions of the firms must first be
calculated.
The profit of firm i is revenue minus cost. Revenue is the product of price and quantity and cost is given by the firm's
cost function, so profit is (as described above): $\Pi_i = P(q_1 + q_2) \cdot q_i - C_i(q_i)$. The best response is to find the
value of $q_i$ that maximises $\Pi_i$ given $q_j$, with $j \neq i$, i.e. given some output of the opponent firm, the output
that maximises profit is found. Hence, the maximum of $\Pi_i$ with respect to $q_i$ is to be found. First take the
derivative of $\Pi_i$ with respect to $q_i$:
$\frac{\partial \Pi_i}{\partial q_i} = \frac{\partial P(q_1 + q_2)}{\partial q_i} \cdot q_i + P(q_1 + q_2) - \frac{\partial C_i(q_i)}{\partial q_i}$
Setting this to zero for maximization:
$\frac{\partial P(q_1 + q_2)}{\partial q_i} \cdot q_i + P(q_1 + q_2) - \frac{\partial C_i(q_i)}{\partial q_i} = 0$
The values of $q_i$ that satisfy this equation are the best responses. The Nash equilibria are where both $q_1$ and $q_2$ are
best responses given those values of $q_1$ and $q_2$.

An example
Suppose the industry has the following price structure: The profit of firm i (with

cost structure such that and for ease of computation) is:

The maximization problem resolves to (from the general case):

Without loss of generality, consider firm 1's problem:

By symmetry:

These are the firms' best response functions. For any value of , firm 1 responds best with any value of that
satisfies the above. In Nash equilibria, both firms will be playing best responses so solving the above equations
simultaneously. Substituting for in firm 1's best response:

The symmetric Nash equilibrium is at . (See Holt (2005, Chapter 13) for asymmetric examples.) Making
suitable assumptions for the partial derivatives (for example, assuming each firm's cost is a linear function of
quantity and thus using the slope of that function in the calculation), the equilibrium quantities can be substituted in
the assumed industry price structure to obtain the equilibrium market price.
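The same fixed point can be reached numerically. This sketch (plain Python; the demand intercept and marginal costs are illustrative numbers) iterates the two best-response functions to convergence and checks the result against the closed form derived above:

def best_response(a, c_own, q_other):
    """Profit-maximizing quantity against a rival producing q_other,
    for inverse demand P(Q) = a - Q and constant marginal cost c_own."""
    return max(0.0, (a - c_own - q_other) / 2)

a, c1, c2 = 100.0, 10.0, 16.0    # illustrative demand intercept and marginal costs
q1 = q2 = 0.0
for _ in range(100):             # iterate best responses toward the fixed point
    q1, q2 = best_response(a, c1, q2), best_response(a, c2, q1)

print(round(q1, 3), round(q2, 3))                # 32.0 26.0
print((a - 2*c1 + c2) / 3, (a - 2*c2 + c1) / 3)  # closed form: 32.0 26.0
print(round(a - q1 - q2, 3))                     # market clearing price: 42.0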

Cournot competition with many firms and the Cournot Theorem


For an arbitrary number of firms, N > 1, the quantities and price can be derived in a manner analogous to that given
above. With linear demand and identical, constant marginal cost the equilibrium values are as follows (here the
constant a is the demand intercept and c is the common marginal cost):
Market demand: $p = a - Q$, where $Q = \sum_{i=1}^{N} q_i$
Cost function: $C_i(q_i) = c \, q_i$, for all i

$q_i = \frac{a - c}{N + 1}$, which is each individual firm's output

$Q = \frac{N (a - c)}{N + 1}$, which is total industry output

$p = \frac{a + N c}{N + 1}$, which is the market clearing price

and

$\Pi_i = \left( \frac{a - c}{N + 1} \right)^2$, which is each individual firm's profit.

The Cournot Theorem then states that, in the absence of fixed costs of production, as the number of firms in the market,
N, goes to infinity, market output, Nq, goes to the competitive level and the price converges to marginal cost.

Hence with many firms a Cournot market approximates a perfectly competitive market. This result can be
generalized to the case of firms with different cost structures (under appropriate restrictions) and non-linear demand.
When the market is characterized by fixed costs of production, however, we can endogenize the number of
competitors by imagining that firms enter the market until their profits are zero. In our linear example with N firms,
when fixed costs for each firm are F, the zero-profit condition $\left( \frac{a - c}{N + 1} \right)^2 = F$ gives the endogenous number of firms:

$N = \frac{a - c}{\sqrt{F}} - 1$

and a production for each firm equal to:

$q = \sqrt{F}$

This equilibrium is usually known as the Cournot equilibrium with endogenous entry, or Marshall equilibrium.[2]

Implications
• Output is greater with Cournot duopoly than monopoly, but lower than with perfect competition.
• Price is lower with Cournot duopoly than monopoly, but not as low as with perfect competition.
• According to this model, the firms have an incentive to form a cartel, effectively turning the Cournot model into a
monopoly. Cartels are usually illegal, so firms might instead tacitly collude using self-imposing strategies to
reduce output, which, ceteris paribus, will raise the price and thus increase profits for all firms involved.

Bertrand versus Cournot


Although both models have similar assumptions, they have very different implications:
• Since the Bertrand model assumes that firms compete on price and not output quantity, it predicts that a duopoly
is enough to push prices down to marginal cost level, meaning that a duopoly will result in perfect competition.
• Neither model is necessarily "better." The accuracy of the predictions of each model will vary from industry to
industry, depending on the closeness of each model to the industry situation.
• If capacity and output can be easily changed, Bertrand is a better model of duopoly competition. If output and
capacity are difficult to adjust, then Cournot is generally a better model.
• Under some conditions the Cournot model can be recast as a two stage model, where in the first stage firms
choose capacities, and in the second they compete in Bertrand fashion.
However, as the number of firms goes to infinity, the Cournot model gives the same result as the Bertrand model: the
market price is pushed to the marginal cost level.

References
• Holt, Charles. Games and Strategic Behavior (PDF version): http://people.virginia.edu/~cah2k/expbooknsf.pdf
• Tirole, Jean. The Theory of Industrial Organization, MIT Press, 1988.
[1] Varian, Hal R. (2006), Intermediate Microeconomics: A Modern Approach (7th ed.), W. W. Norton & Company, p. 490, ISBN 0393927024
[2] Etro, Federico. Simple models of competition (http://dipeco.economia.unimib.it/persone/etro/economia_e_politica_della_concorrenza/notes.pdf), page 6, Dept. of Political Economics, Università di Milano-Bicocca, November 2006

Deadlock
        C       D
c     1, 1    0, 3
d     3, 0    2, 2

In game theory, Deadlock is a game where the action that is mutually most beneficial is also dominant. (An example
payoff matrix for Deadlock is pictured to the right.) This provides a contrast to the Prisoner's Dilemma where the
mutually most beneficial action is dominated. This makes Deadlock of rather less interest, since there is no conflict
between self-interest and mutual benefit. The game provides some interest, however, since one has some motivation
to encourage one's opponent to play a dominated strategy.

General definition

        C       D
c     a, b    c, d
d     e, f    g, h

Any game that satisfies the following two conditions constitutes a Deadlock game: (1) e > g > a > c and (2) d > h > b > f.
These conditions require that d and D be dominant, that (d, D) be of mutual benefit, and that one prefer one's opponent
to play c rather than d.
Like the Prisoner's Dilemma, this game has one unique Nash equilibrium: (d, D).
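Dominance and the equilibrium can be verified mechanically. A small sketch (plain Python, using the example payoffs from the matrix above) enumerates the four strategy profiles and confirms that (d, D) is the only Nash equilibrium:

# Payoffs from the example matrix: rows c, d for player 1; columns C, D for player 2.
payoffs = {('c', 'C'): (1, 1), ('c', 'D'): (0, 3),
           ('d', 'C'): (3, 0), ('d', 'D'): (2, 2)}

def is_nash(row, col):
    u1, u2 = payoffs[(row, col)]
    return (all(payoffs[(r, col)][0] <= u1 for r in 'cd') and
            all(payoffs[(row, c)][1] <= u2 for c in 'CD'))

print([rc for rc in payoffs if is_nash(*rc)])  # [('d', 'D')]: the unique equilibrium
# d dominates c for the row player (3 > 1 and 2 > 0), and symmetrically D dominates C.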

References
• GameTheory.net (http://www.gametheory.net/dictionary/Games/Deadlock.html)

Unscrupulous diner's dilemma


In game theory, the Unscrupulous diner's dilemma (or just Diner's dilemma) is an n-player prisoner's dilemma.
The situation imagined is that several individuals go out to eat, and prior to ordering, they agree to split the check
equally between all of them. Each individual must now choose whether to order the expensive or the inexpensive dish. It
is presupposed that the expensive dish is better than the cheaper one, but not by enough to warrant paying the difference
when eating alone. Each individual reasons that the expense they add to their bill by ordering the more
expensive item is very small, and thus the improved dining experience is worth the money. However, every
individual reasons this way, and they all end up paying for the cost of the more expensive meal, which, by
assumption, is worse for everyone than ordering and paying for the cheaper meal.

Formal definition and equilibrium analysis


Let g represent the joy of eating the expensive meal, b the joy of eating the cheap meal, h the cost of the expensive
meal, l the cost of the cheap meal, and n the number of players. From the description above we have the following
ordering: $g > b$, $h > l$, and $b - l > g - h$ (the expensive meal is not worth the extra cost to a lone diner). Also, in
order to make the game sufficiently similar to the Prisoner's dilemma, we presume that one would prefer to order the
expensive meal given that the others will help defray the cost: $g - \frac{h}{n} > b - \frac{l}{n}$.
Consider an arbitrary set of strategies by a player's opponents. Let the total cost of the other players' meals be x. The
cost of ordering the cheap meal is $\frac{l + x}{n}$ and the cost of ordering the expensive meal is $\frac{h + x}{n}$. So the
utilities for each meal are $g - \frac{h + x}{n}$ for the expensive meal and $b - \frac{l + x}{n}$ for the cheaper meal. By
assumption, the utility of ordering the expensive meal is higher. Remember that the choice of opponents' strategies
was arbitrary and that the situation is symmetric. This proves that the expensive meal is strictly dominant and thus
the unique Nash equilibrium.
If everyone orders the expensive meal, all of the diners pay h and the total utility is $n (g - h)$. On the other hand,
had all the individuals ordered the cheap meal, the total utility would have been $n (b - l)$, which is larger since
$b - l > g - h$. This demonstrates the similarity between the Diner's dilemma and the Prisoner's dilemma. Like the Prisoner's dilemma,

everyone is worse off by playing the unique equilibrium than they would have been if they collectively pursued
another strategy.
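The two inequalities are easy to check with concrete numbers. In this sketch (plain Python; the joys and prices are illustrative values chosen to satisfy the ordering above), ordering the expensive meal is the better unilateral choice whatever the others do, yet everyone ordering the cheap meal gives a higher total utility:

n = 10              # number of diners (illustrative)
g, b = 10.0, 9.0    # joy of the expensive / cheap dish
h, l = 8.0, 2.0     # price of the expensive / cheap dish
# These satisfy the ordering: g > b, h > l, b - l > g - h, and g - h/n > b - l/n.

def utility(meal, others_total_price):
    joy, price = meal
    return joy - (price + others_total_price) / n

for k in range(n):  # k of the other diners order the expensive dish
    x = k * h + (n - 1 - k) * l
    assert utility((g, h), x) > utility((b, l), x)  # expensive strictly dominates

print("all expensive:", n * (g - h))  # total utility 20.0
print("all cheap:    ", n * (b - l))  # total utility 70.0: better for everyone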

Experimental evidence
Gneezy, Haruvy, and Yafe (2004) tested these results in a field experiment. Groups of six diners faced different
billing arrangements. As predicted, subjects consume more when the bill is split equally than when they have to pay
individually. Consumption is highest when the meal is free. Finally, members of some groups had to pay only one
sixth of their individual costs. There was no difference between the amount consumed by these groups and those
splitting the total cost of the meal equally. As the private cost of increased consumption is the same for both
treatments but splitting the cost imposes a burden on other group members, this indicates that participants did not
take the welfare of others into account when making their choices. This contrasts with a large number of laboratory
experiments where subjects face analytically similar choices but the context is more abstract.

References
• Glance, Huberman (1994). "The dynamics of social dilemmas" (http://www.sciamdigital.com/index.cfm?fa=Products.ViewIssuePreview&ARTICLEID_CHAR=F76F506E-1A94-4FC6-A44B-C0F31E0F091). Scientific American.
• Gneezy, U.; Haruvy, E.; Yafe, H. (2004). "The inefficiency of splitting the bill". The Economic Journal 114
(495): 265–280. doi:10.1111/j.1468-0297.2004.00209.x.

Guess 2/3 of the average


In game theory, Guess 2/3 of the average is a game where several people guess what 2/3 of the average of their
guesses will be, and where the numbers are restricted to the real numbers between 0 and 100, inclusive. The winner
is the one closest to the 2/3 average.

Equilibrium analysis
In this game there is no strictly dominant strategy. However, there is a unique pure strategy Nash equilibrium. This
equilibrium can be found by iterated elimination of weakly dominated strategies. Guessing any number that lies
above 66.67 is weakly dominated for every player, since it cannot possibly be 2/3 of the average of any set of guesses. These
can be eliminated. Once these strategies are eliminated for every player, any guess above 44.44 is weakly dominated for
every player, since no player will guess above 66.67 and 2/3 of 66.67 is approximately 44.44. This process will
continue until all numbers above 0 have been eliminated.
This degeneration does not occur in quite the same way if choices are restricted to, for example, the integers between
0 and 100. In this case, all integers except 0 and 1 vanish; it becomes advantageous to select 0 if one expects that at
least 1/4 of all players will do so, and 1 if otherwise. (In this way, it is a lopsided version of the so-called "consensus
game", where one wins by being in the majority.)

Experimental results
This game is a common demonstration in game theory classes, where even economics graduate students fail to guess
0.[1] When performed among ordinary people it is usually found that the winning guess is much higher than 0; e.g.,
21.6 was the winning value in a large internet-based competition organized by the Danish newspaper Politiken, in
which 19,196 people participated for a prize of 5000 Danish kroner.[2]
Creativity Games has an on-line version of the game [3] where you play against the last 100 visitors.
The Museum of Money has an interactive flash applet of the game [4], where each given answer is used to
calculate the current outcome.

Rationality versus common knowledge of rationality


This game illustrates the difference between perfect rationality of an actor and the common knowledge of rationality
of all players. Even perfectly rational players playing in such a game should not guess 0 unless they know that the
other players are rational as well and that all players' rationality is common knowledge. If a rational player
reasonably believes that other players will not follow the chain of elimination described above, it would be rational
for him/her to guess a number above 0.
Interestingly, we can suppose that all the players are rational, but they do not have common knowledge of each
other's rationality. Even in this case, it is not required that every player guess 0, since they may expect each other to
behave irrationally.

Notes
[1] Nagel, Rosemarie (1995). "Unraveling in Guessing Games: An Experimental Study". American Economic Review 85 (5): 1313–1326.
JSTOR 2950991.
[2] (Danish) Astrid Schou, Gæt-et-tal konkurrence afslører at vi er irrationelle (http://politiken.dk/erhverv/article123939.ece), Politiken;
includes a histogram (http://konkurrence.econ.ku.dk/distribution?id=1237&d=6655488e6252d35e705500b68a339c50) of the guesses.
Note that some of the players guessed close to 100. A large number of players guessed 33.3 (i.e. 2/3 of 50), indicating an assumption that
players would guess randomly. A smaller but significant number of players guessed 22.2 (i.e. 2/3 of 33.3), indicating a second iteration of this
theory based on an assumption that players would guess 33.3. The final number of 21.6 was slightly below this peak, implying that on average
each player iterated their assumption 1.07 times.
[3] http://twothirdsofaverage.creativitygames.net
[4] http://museumofmoney.org/exhibitions/games/guessnumber.htm

Kuhn poker
Kuhn poker is a simplified form of poker developed by Harold W. Kuhn. It is a two-player zero-sum game. The
deck includes only three playing cards, for example a King, a Queen, and a Jack. One card is dealt to each player;
then the first player must bet or pass, then the second player may bet or pass. If any player chooses to bet, the
opposing player must bet as well ("call") in order to stay in the round. After both players pass or bet, the player with
the highest card wins the pot. Kuhn demonstrated that there are many game-theoretically optimal strategies for the
first player in this game, but only one for the second player, and that, when played optimally, the first player should
expect to lose at a rate of 1/18 per hand.
In more conventional poker terms:
• Each player antes 1
• Each player is dealt one of the three cards, and the third is put aside unseen
• Player One can check or raise 1
• If Player One checks then Player Two can check or raise 1
• If Player Two checks there is a showdown for the pot of 2
• If Player Two raises then Player One can fold or call
• If Player One folds then Player Two takes the pot of 3
• If Player One calls there is a showdown for the pot of 4
• If Player One raises then Player Two can fold or call
• If Player Two folds then Player One takes the pot of 3
• If Player Two calls there is a showdown for the pot of 4
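
To make the betting tree concrete, here is a minimal Lua sketch of a single-hand payoff evaluator (net result for Player One, antes included). The strategy functions passed in are hypothetical pure strategies for illustration, not the optimal strategies Kuhn derived:

    -- Net payoff to Player One for one deal (cards: 1 = Jack, 2 = Queen, 3 = King).
    -- Each strategy argument is a function from the holder's card to an action.
    local function play(card1, card2, p1_open, p2_after_check, p2_after_bet, p1_after_raise)
        local showdown = (card1 > card2) and 1 or -1
        if p1_open(card1) == "bet" then
            if p2_after_bet(card2) == "call" then return 2 * showdown end
            return 1                        -- Player Two folds: Player One takes the pot of 3
        else                                -- Player One checks
            if p2_after_check(card2) == "bet" then
                if p1_after_raise(card1) == "call" then return 2 * showdown end
                return -1                   -- Player One folds, losing his ante
            end
            return showdown                 -- both check: showdown for the pot of 2
        end
    end

    -- Example: both players always check and always call ("passive" play).
    local check = function(card) return "check" end
    local call  = function(card) return "call" end
    local total = 0
    for c1 = 1, 3 do
        for c2 = 1, 3 do
            if c1 ~= c2 then
                total = total + play(c1, c2, check, check, call, call)
            end
        end
    end
    print("Player One's average payoff under passive play:", total / 6)  -- 0 by symmetry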

References
• Kuhn, H. W. (1950). "Simplified Two-Person Poker". In Kuhn, H. W.; Tucker, A. W. (eds.). Contributions to the
Theory of Games. 1. Princeton University Press. pp. 97–103.

External links
• Effective Short-Term Opponent Exploitation in Simplified Poker [1]

References
[1] http://www.cs.ualberta.ca/~holte/Publications/aaai2005poker.pdf

Nash bargaining game


The two person bargaining problem is a problem of understanding how two agents should cooperate when
non-cooperation leads to Pareto-inefficient results. It is in essence an equilibrium selection problem: many games
have multiple equilibria with varying payoffs for each player, forcing the players to negotiate on which equilibrium
to target. The quintessential example of such a game is the Ultimatum game. The underlying assumption of
bargaining theory is that the resulting solution should be the same solution an impartial arbitrator would recommend.
Solutions to bargaining come in two flavors: an axiomatic approach where desired properties of a solution are
satisfied and a strategic approach where the bargaining procedure is modeled in detail as a sequential game.

The bargaining game


The bargaining game or Nash bargaining game is a simple two-player game used to model bargaining
interactions. In the Nash Bargaining Game two players demand a portion of some good (usually some amount of
money). If the total amount requested by the players is less than that available, both players get their request. If their
total request is greater than that available, neither player gets their request. A Nash bargaining solution is a (Pareto
efficient) solution to a Nash bargaining game. According to Walker (2005), Nash's bargaining solution was shown
by John Harsanyi to be the same as Zeuthen's solution of the bargaining problem (Problems of Monopoly and
Economic Warfare, 1930).

An example
              Opera     Football
Opera         3,2       0,0
Football      0,0       2,3

Battle of the Sexes 1

The Battle of the Sexes, as shown, is a two player coordination game. Both Opera/Opera and Football/Football are
Nash equilibria. Any probability distribution over these two Nash equilibria is a correlated equilibrium. The question
then becomes which of the infinitely many possible equilibria should be chosen by the two players. If they disagree
and choose different distributions, they are likely to receive 0 payoffs. In this symmetric case the natural choice is to
play Opera/Opera and Football/Football with equal probability. Indeed, all bargaining solutions described below
prescribe this solution. However, if the game is asymmetric (for example, if Football/Football instead yields payoffs
of 2,5) the appropriate distribution is less clear. The problem of finding such a distribution is addressed by
bargaining theory.

Formal description
A two-person bargaining problem consists of a disagreement, or threat, point d = (d1, d2), where d1 and d2 are the
respective payoffs to player 1 and player 2, and a feasibility set F, a closed convex subset of the plane, the elements of
which are interpreted as agreements. The set F is convex because an agreement could take the form of a correlated
combination of other agreements. The problem is nontrivial if some agreements in F are better for both parties than
the disagreement point. The goal of bargaining is to choose the feasible agreement in F that could result from
negotiations.

Feasibility set
Which agreements are feasible depends on whether bargaining is mediated by an additional party. When binding
contracts are allowed, any joint action is playable, and the feasibility set consists of all attainable payoffs better than
the disagreement point. When binding contracts are unavailable, the players can defect (moral hazard), and the
feasibility set is composed of correlated equilibria, since these outcomes require no exogenous enforcement.

Disagreement point
The disagreement point is the value the players can expect to receive if negotiations break down. This could be
some focal equilibrium that both players could expect to play. This point directly affects the bargaining solution,
however, so it stands to reason that each player should attempt to choose his disagreement point in order to
maximize his bargaining position. Towards this objective, it is often advantageous to increase one's own
disagreement payoff while harming the opponent's disagreement payoff (hence the interpretation of the disagreement
as a threat). If threats are viewed as actions, then one can construct a separate game wherein each player chooses a
threat and receives a payoff according to the outcome of bargaining; this is known as Nash's variable threat game.
Alternatively, each player could play a minimax strategy in case of disagreement, choosing to disregard personal
reward in order to hurt the opponent as much as possible should the opponent leave the bargaining table.

Equilibrium analysis
Strategies are represented in the Nash bargaining game by a pair (x, y). x and y are selected from the interval [d, z],
where d is the disagreement payoff and z is the total good. If x + y is equal to or less than z, the first player receives x
and the second y. Otherwise both get d. Here d represents the disagreement point or the threat of the game; often d = 0.
There are many Nash equilibria in the Nash bargaining game. Any x and y such that x + y = z is a Nash equilibrium.
If either player increases their demand, both players receive nothing. If either reduces their demand they will receive
less than if they had demanded x or y. There is also a Nash equilibrium where both players demand the entire good.
Here both players receive nothing, but neither player can increase their return by unilaterally changing their strategy.
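
A quick numerical check of this equilibrium claim, as a Lua sketch with illustrative values z = 10, d = 0 and the compatible demands x = 4, y = 6 (neither raising nor lowering a demand helps player 1):

    local z, d = 10, 0
    local function payoff(x, y)                -- player 1's payoff
        if x + y <= z then return x end
        return d
    end

    local x, y = 4, 6                          -- compatible demands: x + y = z
    for dx = -1, 1 do
        print(string.format("player 1 demands %d: payoff %d", x + dx, payoff(x + dx, y)))
    end
    -- demanding 3 yields 3, demanding 4 yields 4, demanding 5 yields 0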

Bargaining solutions
Various solutions have been proposed based on slightly different assumptions about what properties are desired for
the final agreement point.

Nash bargaining solution


John Nash proposed that a solution should satisfy certain axioms:
1. Invariant to affine transformations or Invariant to equivalent utility representations
2. Pareto optimality
3. Independence of irrelevant alternatives
4. Symmetry
Let u and v be the utility functions of Player 1 and Player 2, respectively. In the Nash bargaining solution, the
players will seek to maximize (u(x) − u(d)) (v(y) − v(d)), where u(d) and v(d) are the status quo
utilities (i.e. the utility obtained if one decides not to bargain with the other player). The product of the two excess
utilities is generally referred to as the Nash product.
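
As a sketch of this maximization, the following Lua snippet grid-searches the Nash product on a hypothetical linear frontier u + v = 10 with assumed status quo utilities u(d) = 1 and v(d) = 2:

    local ud, vd = 1, 2                       -- assumed status quo utilities
    local best_u, best_product = 0, -math.huge
    for u = 0, 10, 0.01 do                    -- grid over the frontier u + v = 10
        local product = (u - ud) * ((10 - u) - vd)
        if product > best_product then best_u, best_product = u, product end
    end
    print(string.format("Nash solution near u = %.2f, v = %.2f", best_u, 10 - best_u))
    -- splits the surplus over the disagreement point equally: u ≈ 4.5, v ≈ 5.5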

Kalai-Smorodinsky bargaining solution


Independence of Irrelevant Alternatives can be substituted with a monotonicity condition, as demonstrated by Ehud
Kalai and Meir Smorodinsky. Their solution is the point which maintains the ratios of maximal gains. In other words,
if player 1 could receive a maximum of g1 with player 2's help (and vice versa g2 for player 2), then the
Kalai–Smorodinsky bargaining solution yields the point (φ1, φ2) on the Pareto frontier such that φ1/φ2 = g1/g2.

Egalitarian bargaining solution


The egalitarian bargaining solution, introduced by Ehud Kalai, is a third solution which drops the condition of scale
invariance while including both the axiom of Independence of irrelevant alternatives, and the axiom of monotonicity.
It is the solution which attempts to grant equal gain to both parties. In other words, it is the point which maximizes
the minimum payoff among players.

Applications
Some philosophers and economists have recently used the Nash bargaining game to explain the emergence of human
attitudes toward distributive justice (Alexander 2000; Alexander and Skyrms 1999; Binmore 1998, 2005). These
authors primarily use evolutionary game theory to explain how individuals come to believe that proposing a 50-50
split is the only just solution to the Nash Bargaining Game.

References
• Alexander, Jason McKenzie (2000). "Evolutionary Explanations of Distributive Justice". Philosophy of Science
67 (3): 490–516. JSTOR 188629.
• Alexander, Jason; Skyrms, Brian (1999). "Bargaining with Neighbors: Is Justice Contagious?". Journal of
Philosophy 96 (11): 588–598. JSTOR 2564625.
• Binmore, K.; Rubinstein, A.; Wolinsky, A. (1986). "The Nash Bargaining Solution in Economic Modelling".
RAND Journal of Economics 17: 176–188. JSTOR 2555382.
• Binmore, Kenneth (1998). Game Theory and The Social Contract Volume 2: Just Playing. Cambridge: MIT
Press. ISBN 0262024446.
• Binmore, Kenneth (2005). Natural Justice. New York: Oxford University Press. ISBN 0195178114.
• Kalai, Ehud (1977). "Proportional solutions to bargaining situations: Intertemporal utility comparisons".
Econometrica 45 (7): 1623–1630. JSTOR 1913954.
• Kalai, Ehud & Smorodinsky, Meir (1975). "Other solutions to Nash’s bargaining problem". Econometrica 43 (3):
513–518. JSTOR 1914280.
• Nash, John (1950). "The Bargaining Problem". Econometrica 18 (2): 155–162. JSTOR 1907266.
• Walker, Paul (2005). "History of Game Theory" [1].

External links
• Nash Bargaining Solutions [2]

Screening game
A screening game is a two-player principal–agent type game used in economic and game theoretical modeling.
Principal–agent problems are situations where there are two players whose interests are not necessarily at odds, but
where complete honesty is not optimal for one player. This leads to strategies where the players exchange
information based on their actions, which is to some degree noisy. This ambiguity prevents the other player from
taking advantage of the first. The game is closely related to signaling games, but there is a difference in how
information is exchanged. In the principal-agent model, for instance, there is an employer (the principal) and a
worker (the agent). The worker has a given skill level, and chooses the amount of effort he will exert. If the worker
knows his ability (which is given at the outset, perhaps by nature), and can acquire credentials or somehow signal
that ability to the employer before being offered a wage, then the problem is signaling. What sets apart a screening
game is that the employer offers a wage level first, at which point the worker chooses the amount of credentials he
will acquire (perhaps in the form of education or skills) and accepts or rejects a contract for a wage level. It is called
screening, because the worker is screened by the employer in that the offers may be contingent on the skill level of
the worker.
Some economists use the terms signaling and screening interchangeably, and the distinction can be attributed to
Stiglitz and Weiss (1989).

References
• Stiglitz, Joseph & Andrew Weiss (1989) “Sorting out the Differences Between Screening and Signalling Models,”
in Papers in Commemoration of the Economic Theory Seminar at Oxford University, edited by Michael
Dempster, Oxford: Oxford University Press.

Princess and monster game


In game theory, the princess and monster game is a pursuit-evasion game played by two players in a region. The
game was devised by Rufus Isaacs and published in his book Differential Games (1965) as follows. "The monster
searches for the princess, the time required being the payoff. They are both in a totally dark room (of any shape), but
they are each cognizant of its boundary. Capture means that the distance between the princess and the monster is
within the capture radius, which is assumed to be small in comparison with the dimension of the room. The monster,
supposed highly intelligent, moves at a known speed. We permit the princess full freedom of locomotion."[1]
This game remained a well known open problem until it was solved by Shmuel Gal in the late 1970s.[2] [3] His
optimal strategy for the princess is especially interesting. Go to a random location in the room. Stay still for a time
interval which is not too short but not too long, go to another (independent) random location and repeat the
procedure.[3] [4] [5] His proposed optimal search strategy is based on subdividing the room into many narrow
rectangles, picking a rectangle at random and searching it in some specific way. After some time picking another
rectangle randomly and independently, etc. The exact details of the search and evasion strategies are given in the
references.
Princess and monster games can be played on a pre-selected graph. (A possible simple graph is the circle, suggested
by Isaacs as a stepping stone for the game in the region.) It can be demonstrated that for any finite graph an optimal
mixed search strategy exists that results in a finite payoff. This game has been solved only for the very simple graph
consisting of a single loop (a circle).[6] The value of the game on the unit interval (a graph with two nodes with a link
in between) has been estimated only approximately. This game looks simple but is quite complicated. Surprisingly, the
'obvious' search strategy of starting at one end (chosen at random) and 'sweeping' the whole interval as fast as
possible is not optimal. This strategy guarantees an expected capture time of 0.75. However, by utilising a more
sophisticated mixed searcher and hider strategy, one can reduce the expected capture time by about 8.6%. In fact,
this number would be quite close to the value of the game if someone were able to prove the optimality of the related
strategy for the princess.[7] [8]

References
[1] R. Isaacs, Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization, John Wiley &
Sons, New York (1965), pp. 349–350.
[2] S. Gal, Search Games, Academic Press, New York (1980).
[3] Gal, Shmuel (1979). "Search games with mobile and immobile hider". SIAM J. Control Optim. 17 (1): 99–122. doi:10.1137/0317009.
MR0516859.
[4] A. Garnaev (1992). "A Remark on the Princess and Monster Search Game" (http://www.apmath.spbu.ru/~kmms/garnaev/html/Downloads/1992bGT.pdf). Int. J. Game Theory 20 (3): 269–276. doi:10.1007/BF01253781.
[5] M. Chrobak (2004). "A princess swimming in the fog looking for a monster cow". ACM SIGACT News 35 (2): 74–78.
doi:10.1145/992287.992304.
[6] S. Alpern (1973). "The search game with mobile hiders on the circle". Proceedings of the Conference on Differential Games and Control
Theory.
[7] S. Alpern, R. Fokkink, R. Lindelauf, and G. J. Olsder. Numerical Approaches to the 'Princess and Monster' Game on the Interval
(http://www.cdam.lse.ac.uk/Reports/Files/cdam-2006-18.pdf). SIAM J. Control and Optimization 2008.
[8] L. Geupel. The 'Princess and Monster' Game on an Interval (http://hempelz.de/lenny/Leonhard Geupel - Bachelor's Thesis - The 'Princess
and Monster' Game on an Interval.pdf).

Theorems

Minimax
Minimax (sometimes minmax) is a decision rule used in decision theory, game theory, statistics and philosophy for
minimizing the possible loss for a worst case (maximum loss) scenario. Alternatively, it can be thought of as
maximizing the minimum gain (maximin). Originally formulated for two-player zero-sum game theory, covering
both the cases where players take alternate moves and those where they make simultaneous moves, it has also been
extended to more complex games and to general decision making in the presence of uncertainty.

Game theory
In the theory of simultaneous games, a minimax strategy is a mixed strategy which is part of the solution to a
zero-sum game. In zero-sum games, the minimax solution is the same as the Nash equilibrium.

Minimax theorem
The minimax theorem states:[1]:22
For every two-person, zero-sum game with finitely many strategies, there exists a value V and a mixed
strategy for each player, such that (a) Given player 2's strategy, the best payoff possible for player 1 is
V, and (b) Given player 1's strategy, the best payoff possible for player 2 is −V.
Equivalently, Player 1's strategy guarantees him a payoff of V regardless of Player 2's strategy, and similarly Player
2 can guarantee himself a payoff of −V. The name minimax arises because each player minimizes the maximum
payoff possible for the other—since the game is zero-sum, he also maximizes his own minimum payoff.
This theorem was established by John von Neumann,[2] who is quoted as saying "As far as I can see, there could be
no theory of games … without that theorem … I thought there was nothing worth publishing until the Minimax
Theorem was proved".[3]
See Sion's minimax theorem and Parthasarathy's theorem for generalizations; see also example of a game without a
value.

Example

               B chooses B1    B chooses B2    B chooses B3
A chooses A1        +3              −2              +2
A chooses A2        −1               0              +4
A chooses A3        −4              −3              +1

The following example of a zero-sum game, where A and B make simultaneous moves, illustrates minimax
solutions. Suppose each player has three choices and consider the payoff matrix for A displayed at right. Assume the
payoff matrix for B is the same matrix with the signs reversed (i.e. if the choices are A1 and B1 then B pays 3 to A).
Then, the minimax choice for A is A2 since the worst possible result is then having to pay 1, while the simple
minimax choice for B is B2 since the worst possible result is then no payment. However, this solution is not stable,
since if B believes A will choose A2 then B will choose B1 to gain 1; then if A believes B will choose B1 then A
will choose A1 to gain 3; and then B will choose B2; and eventually both players will realize the difficulty of making
a choice. So a more stable strategy is needed.


Some choices are dominated by others and can be eliminated: A will not choose A3 since either A1 or A2 will
produce a better result, no matter what B chooses; B will not choose B3 since some mixtures of B1 and B2 will
produce a better result, no matter what A chooses.
A can avoid having to make an expected payment of more than 1/3 by choosing A1 with probability 1/6 and A2 with
probability 5/6, no matter what B chooses. B can ensure an expected gain of at least 1/3 by using a randomized
strategy of choosing B1 with probability 1/3 and B2 with probability 2/3, no matter what A chooses. These mixed
minimax strategies are now stable and cannot be improved.
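
These mixed strategies are easy to verify numerically; a minimal Lua sketch using the payoff matrix from the table above:

    -- Rows are A's choices, columns are B's; entries are payments from B to A.
    local M  = { {3, -2, 2}, {-1, 0, 4}, {-4, -3, 1} }
    local pA = { 1/6, 5/6, 0 }     -- A's mixed strategy over A1, A2, A3
    local pB = { 1/3, 2/3, 0 }     -- B's mixed strategy over B1, B2, B3

    for j = 1, 3 do                -- A's expected payoff against each pure B choice
        local e = 0
        for i = 1, 3 do e = e + pA[i] * M[i][j] end
        print(string.format("A's mix vs B%d: %+.3f", j, e))   -- never below -1/3
    end
    for i = 1, 3 do                -- each pure A choice against B's mix
        local e = 0
        for j = 1, 3 do e = e + pB[j] * M[i][j] end
        print(string.format("A%d vs B's mix: %+.3f", i, e))   -- never above -1/3
    end
    -- A guarantees at least -1/3 and B holds A to at most -1/3: the value is -1/3.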

Maximin
Frequently, in game theory, maximin is distinct from minimax. Minimax is used in zero-sum games to denote
minimizing the opponent's maximum payoff. In a zero-sum game, this is identical to minimizing one's own
maximum loss, and to maximizing one's own minimum gain.
"Maximin" is a term commonly used for non-zero-sum games to describe the strategy which maximizes one's own
minimum payoff. In non-zero-sum games, this is not generally the same as minimizing the opponent's maximum
gain, nor the same as the Nash equilibrium strategy.

Combinatorial game theory


In combinatorial game theory, there is a minimax algorithm for game solutions.
A simple version of the minimax algorithm, stated below, deals with games such as tic-tac-toe, where each player
can win, lose, or draw. If player A can win in one move, his best move is that winning move. If player B knows that
one move will lead to the situation where player A can win in one move, while another move will lead to the
situation where player A can, at best, draw, then player B's best move is the one leading to a draw. Late in the game,
it's easy to see what the "best" move is. The Minimax algorithm helps find the best move, by working backwards
from the end of the game. At each step it assumes that player A is trying to maximize the chances of A winning,
while on the next turn player B is trying to minimize the chances of A winning (i.e., to maximize B's own chances of
winning).

Minimax algorithm with alternate moves


A minimax algorithm[4] is a recursive algorithm for choosing the next move in an n-player game, usually a
two-player game. A value is associated with each position or state of the game. This value is computed by means of
a position evaluation function and it indicates how good it would be for a player to reach that position. The player
then makes the move that maximizes the minimum value of the position resulting from the opponent's possible
following moves. If it is A's turn to move, A gives a value to each of his legal moves.
A possible allocation method consists in assigning a certain win for A as +1 and for B as −1. This leads to
combinatorial game theory as developed by John Horton Conway. An alternative is using a rule that if the result of a
move is an immediate win for A it is assigned positive infinity and, if it is an immediate win for B, negative infinity.
The value to A of any other move is the minimum of the values resulting from each of B's possible replies. For this
reason, A is called the maximizing player and B is called the minimizing player, hence the name minimax algorithm.
The above algorithm will assign a value of positive or negative infinity to any position, since the value of every
position will be the value of some final winning or losing position. This is generally only possible at the very end of
complicated games such as chess or go, since it is not computationally feasible to look ahead as far as the completion
of the game except towards the end; instead, positions are given finite values as estimates of the degree of belief that
they will lead to a win for one player or another.
This can be extended if we can supply a heuristic evaluation function which gives values to non-final game states
without considering all possible following complete sequences. We can then limit the minimax algorithm to look
only at a certain number of moves ahead. This number is called the "look-ahead", measured in "plies". For example,
the chess computer Deep Blue (that beat Garry Kasparov) looked ahead at least 12 plies, then applied a heuristic
evaluation function.
The algorithm can be thought of as exploring the nodes of a game tree. The effective branching factor of the tree is
the average number of children of each node (i.e., the average number of legal moves in a position). The number of
nodes to be explored usually increases exponentially with the number of plies (it is less than exponential if
evaluating forced moves or repeated positions). The number of nodes to be explored for the analysis of a game is
therefore approximately the branching factor raised to the power of the number of plies. It is therefore impractical to
completely analyze games such as chess using the minimax algorithm.
The performance of the naïve minimax algorithm may be improved dramatically, without affecting the result, by the
use of alpha-beta pruning. Other heuristic pruning methods can also be used, but not all of them are guaranteed to
give the same result as the un-pruned search.
A naïve minimax algorithm may be trivially modified to additionally return an entire Principal Variation along with
a minimax score.

Lua example
function minimax(node, depth)
    if depth <= 0 then
        -- positive values are good for the maximizing player
        -- negative values are good for the minimizing player
        return objective_value(node)
    end

    -- maximizing player is (+1), minimizing player is (-1);
    -- start from the worst possible score for the current player
    local alpha = -node.player * math.huge

    local child = next_child(node, nil)
    while child ~= nil do
        local score = minimax(child, depth - 1)
        -- the maximizer keeps the largest score seen so far,
        -- the minimizer the smallest
        alpha = node.player == 1 and math.max(alpha, score)
                                  or math.min(alpha, score)
        child = next_child(node, child)
    end

    return alpha
end

Pseudocode
Pseudocode for the Negamax version of the minimax algorithm (using an evaluation heuristic to terminate at a given
depth) is given below.
The code is based on the observation that max(a, b) = −min(−a, −b). This avoids the need for the algorithm
to treat the two players separately.
function integer minimax(node, depth)
if node is a terminal node or depth <= 0:
return the heuristic value of node
α = -∞
for child in node: # evaluation is identical for both players
α = max(α, -minimax(child, depth-1))
return α
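
Alpha-beta pruning, mentioned above, can be grafted onto this negamax formulation without changing the result. A minimal Lua sketch, where heuristic_value and children are assumed helper functions analogous to objective_value and next_child in the Lua example:

    local function alphabeta(node, depth, alpha, beta)
        if depth <= 0 or node.terminal then
            return heuristic_value(node)     -- score from the current player's viewpoint
        end
        for _, child in ipairs(children(node)) do
            -- negate and swap the window: the child's best case is our worst case
            local score = -alphabeta(child, depth - 1, -beta, -alpha)
            if score > alpha then alpha = score end
            if alpha >= beta then break end  -- cutoff: the opponent will avoid this branch
        end
        return alpha
    end
    -- initial call: alphabeta(root, depth, -math.huge, math.huge)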

Example
Suppose the game being played only has a maximum of two possible moves per player each turn. The algorithm
generates the tree on the right, where the circles represent the moves of the player running the algorithm
(maximizing player), and squares represent the moves of the opponent (minimizing player). Because of the
limitation of computation resources, as explained above, the tree is limited to a look-ahead of 4 moves.
The algorithm evaluates each leaf node using a heuristic evaluation function, obtaining the values shown. The moves
where the maximizing player wins are assigned positive infinity, while the moves that lead to a win of the
minimizing player are assigned negative infinity. At level 3, the algorithm will choose, for each node, the smallest of
the child node values, and assign it to that same node (e.g. the node on the left will choose the minimum between
"10" and "+∞", therefore assigning the value "10" to itself). The next step, in level 2, consists of choosing for each
node the largest of the child node values. Once again, the values are assigned to each parent node. The algorithm
continues evaluating the maximum and minimum values of the child nodes alternately until it reaches the root node,
where it chooses the move with the largest value (represented in the figure with a blue arrow). This is the move that
the player should make in order to minimize the maximum possible loss.

Minimax for individual decisions

Minimax in the face of uncertainty


Minimax theory has been extended to decisions where there is no other player, but where the consequences of
decisions depend on unknown facts. For example, deciding to prospect for minerals entails a cost which will be
wasted if the minerals are not present, but will bring major rewards if they are. One approach is to treat this as a
game against nature (see move by nature), and using a similar mindset as Murphy's law, take an approach which
minimizes the maximum expected loss, using the same techniques as in the two-person zero-sum games.

In addition, expectiminimax trees have been developed, for two-player games in which chance (for example, dice) is
a factor.

Minimax criterion in statistical decision theory


In classical statistical decision theory, we have an estimator δ that is used to estimate a parameter θ ∈ Θ. We also
assume a risk function R(θ, δ), usually specified as the integral of a loss function. In this framework, an estimator δ*
is called minimax if it satisfies

    sup_θ R(θ, δ*) = inf_δ sup_θ R(θ, δ)

An alternative criterion in the decision theoretic framework is the Bayes estimator in the presence of a prior
distribution Π. An estimator is Bayes if it minimizes the average risk

    ∫ R(θ, δ) dΠ(θ)
Non-probabilistic decision theory


A key feature of minimax decision making is being non-probabilistic: in contrast to decisions using expected value
or expected utility, it makes no assumptions about the probabilities of various outcomes, just scenario analysis of
what the possible outcomes are. It is thus robust to changes in the assumptions, as these other decision techniques are
not. Various extensions of this non-probabilistic approach exist, notably minimax regret and Info-gap decision
theory.
Further, minimax only requires ordinal measurement (that outcomes be compared and ranked), not interval
measurements (that outcomes include "how much better or worse"), and returns ordinal data, using only the modeled
outcomes: the conclusion of a minimax analysis is: "this strategy is minimax, as the worst case is (outcome), which
is less bad than any other strategy". Compare to expected value analysis, whose conclusion is of the form: "this
strategy yields E(X)=n." Minimax thus can be used on ordinal data, and can be more transparent.

Maximin in philosophy
In philosophy, the term "maximin" is often used in the context of John Rawls's A Theory of Justice, where he refers
to it (Rawls (1971, p. 152)) in the context of The Difference Principle. Rawls defined this principle as the rule which
states that social and economic inequalities should be arranged so that "they are to be of the greatest benefit to the
least-advantaged members of society". In other words, an unequal distribution can be just when it maximizes the
minimum benefit to those who have the lowest allocation of welfare-conferring resources (which he refers to as
"primary goods").[5] [6]

Notes
[1] Osborne, Martin J., and Ariel Rubinstein. A Course in Game Theory. Cambridge, MA: MIT, 1994. Print.
[2] Von Neumann, J: Zur Theorie der Gesellschaftsspiele Math. Annalen. 100 (1928) 295-320
[3] John L Casti (1996). Five golden rules: great theories of 20th-century mathematics – and why they matter
(http://worldcat.org/isbn/0-471-00261-5). New York: Wiley-Interscience. p. 19. ISBN 0-471-00261-5.
[4] Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (http://aima.cs.berkeley.edu/) (2nd ed.), Upper Saddle
River, New Jersey: Prentice Hall, pp. 163–171, ISBN 0-13-790395-2.
[5] Arrow, "Some Ordinalist-Utilitarian Notes on Rawls's Theory of Justice", Journal of Philosophy 70, 9 (May 1973), pp. 245–263.
[6] Harsanyi, "Can the Maximin Principle Serve as a Basis for Morality? A Critique of John Rawls's Theory", American Political Science Review
69, 2 (June 1975), pp. 594–606.

External links
• A visualization applet (http://www.cut-the-knot.org/Curriculum/Games/MixedStrategies.shtml)
• "Maximin principle" from A Dictionary of Philosophical Terms and Names (http://www.swif.uniba.it/lei/foldop/foldoc.cgi?maximin+principle)
• Play a betting-and-bluffing game against a mixed minimax strategy (http://www.bewersdorff-online.de/quaak/rules.htm)
• The Dictionary of Algorithms and Data Structures entry for minimax (http://www.nist.gov/dads/HTML/minimax.html)
• Minimax (with or without alpha-beta pruning) algorithm visualization - game tree (Java Applet) (http://wolfey.110mb.com/GameVisual/launch.php)
• CLISP minimax game (http://mmengineer.blogspot.com/2008/05/inteligencia-artificial-minimax-clisp.html) (in Spanish)
• Maximin Strategy from Game Theory (http://franteractive.net/maximin.html)

Purification theorem
In game theory, the purification theorem was contributed by Nobel laureate John Harsanyi in 1973.[1] The theorem
aims to justify a puzzling aspect of mixed strategy Nash equilibria: that each player is wholly indifferent amongst
each of the actions he puts non-zero weight on, yet he mixes them so as to make every other player also indifferent.
The mixed strategy equilibria are explained as being the limit of pure strategy equilibria for a disturbed game of
incomplete information in which the payoffs of each player are known to themselves but not to their opponents. The
idea is that the predicted mixed strategy of the original game emerges as an ever-improving approximation of a game
that is not observed by the theorist who designed the original, idealized game.
The apparently mixed nature of the strategy is actually just the result of each player playing a pure strategy with
threshold values that depend on the ex-ante distribution over the continuum of payoffs that a player can have. As that
continuum shrinks to zero, the players' strategies converge to the predicted Nash equilibria of the original,
unperturbed, complete information game.
The result is also an important aspect of modern day inquiries in evolutionary game theory where the perturbed
values are interpreted as distributions over types of players randomly paired in a population to play games.
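
The flavor of the result can be illustrated with a simulation (a sketch of the idea only, not Harsanyi's construction): in matching pennies, give each player a small private payoff shock for playing Heads and let him follow the pure cutoff strategy "Heads iff the shock is positive". The empirical mixture approaches the mixed equilibrium (1/2, 1/2) of the unperturbed game:

    math.randomseed(42)
    local trials, heads = 100000, 0
    for _ = 1, trials do
        local t = 2 * math.random() - 1       -- private payoff shock, seen only by the player
        if t > 0 then heads = heads + 1 end   -- pure cutoff strategy given the realized type
    end
    print("empirical frequency of Heads:", heads / trials)  -- close to 0.5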

Technical Details
Harsanyi's proof involves the strong assumption that the perturbations for each player are independent of the other
players'. However, further refinements to make the theorem more general have been attempted.[2] [3]
The main result of the theorem is that all the mixed strategy equilibria of a given game can be purified using the
same sequence of perturbed games. However, in addition to independence of the perturbations, it relies on the set of
payoffs for this sequence of games being of full measure. There are games, of a pathological nature, for which this
condition fails to hold.
The main problem with these games falls into one of two categories: (1) various mixed strategies of the game are
purified by different sequences of perturbed games and (2) some mixed strategies of the game involve weakly
dominated strategies. No mixed strategy involving a weakly dominated strategy can be purified using this method,
because if there is ever any positive probability that the opponent will play a strategy for which the weakly
dominated strategy is not a best response, then one will never wish to play the weakly dominated strategy. Hence,
the limit fails to hold because it involves a discontinuity.[4]

References
[1] J. C. Harsanyi. 1973. "Games with randomly disturbed payoffs: a new rationale for mixed-strategy equilibrium points". Int. J. Game Theory 2
(1973), pp. 1–23.
[2] R. Aumann, et al. 1983. "Approximate Purification of Mixed Strategies". Mathematics of Operations Research 8 (1983), pp. 327–341.
[3] Govindan, S., Reny, P. J. and Robson, A. J. 2003. "A Short Proof of Harsanyi's Purification Theorem". Games and Economic Behavior 45 (2)
(2003), pp. 369–374.
[4] Fudenberg, Drew and Jean Tirole: Game Theory, MIT Press, 1991, pp. 233–234.

Folk theorem
Folk theorem
A solution concept in game theory

Relationships
  Subset of: Minimax, Nash equilibrium

Significance
  Proposed by: various, notably Ariel Rubinstein
  Used for: infinitely repeated games
  Example: repeated prisoner's dilemma

In game theory, folk theorems are a class of theorems which imply that in repeated games, any outcome is a feasible
solution concept, if under that outcome the players' minimax conditions are satisfied. The minimax condition states
that a player will minimize the maximum possible loss which he could face in the game. An outcome is said to be
feasible if it satisfies this condition for each player of the game. A repeated game is one in which there is not
necessarily a final move, but rather, there is a sequence of rounds, during which the player may gather information
and choose moves. An early published example is (Friedman 1971).
In mathematics, the term folk theorem refers generally to any theorem that is believed and discussed, but has not
been published. In order that the name of the theorem be more descriptive, Roger Myerson has recommended the
phrase general feasibility theorem in the place of folk theorem for describing theorems which are of this class.[1]

Sketch of proof
A commonly referenced proof of a folk theorem was published in (Rubinstein 1979).
The method for proving folk theorems is actually quite simple. A grim trigger strategy is a strategy which punishes
an opponent for any deviation from some certain behavior. So, all of the players of the game first must have a certain
feasible outcome in mind. Then the players need only adhere to an almost grim trigger strategy under which any
deviation from the strategy which will bring about the intended outcome is punished to a degree such that any gains
made by the deviator on account of the deviation are exactly cancelled out. Thus, there is no advantage to any player
for deviating from the course which will bring out the intended, and arbitrary, outcome, and the game will proceed in
exactly the manner to bring about that outcome.
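
The deviation accounting can be checked with a few lines of Lua. The stage payoffs below (mutual cooperation 3, mutual defection 1, one-shot gain from defecting 5) are illustrative; under grim trigger, cooperation is sustained when 3/(1−δ) ≥ 5 + δ·1/(1−δ), i.e. when the discount factor δ is at least 1/2:

    for _, delta in ipairs({0.3, 0.5, 0.7, 0.9}) do
        local cooperate = 3 / (1 - delta)               -- cooperate forever
        local deviate   = 5 + delta * 1 / (1 - delta)   -- defect once, punished forever after
        print(string.format("delta = %.1f: cooperate %6.2f, deviate %6.2f, sustained: %s",
            delta, cooperate, deviate, tostring(cooperate >= deviate)))
    end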

Applications
It is possible to apply this class of theorems to a diverse number of fields. An application in anthropology, for
example, would be that in a community where all behavior is well known, and where members of the community
know that they will continue to have to deal with each other, then any pattern of behavior (traditions, taboos, etc.)
may be sustained by social norms so long as the individuals of the community are better off remaining in the
community than they would be leaving the community (the minimax condition).
On the other hand, MIT economist Franklin Fisher has noted that the folk theorem is not a positive theory.[2] In
considering, for instance, oligopoly behavior, the folk theorem does not tell the economist what firms will do, but
rather that cost and demand functions are not sufficient for a general theory of oligopoly, and the economists must
include the context within which oligopolies operate in their theory.[2]
In 2007, Borgs et al. proved that, despite the folk theorem, in the general case computing the Nash equilibria for
repeated games is not easier than computing the Nash equilibria for one-shot finite games, a problem which lies in
the PPAD complexity class.[3]

Notes
[1] Myerson, Roger B. Game Theory, Analysis of conflict, Cambridge, Harvard University Press (1991)
[2] Fisher, Franklin M. Games Economists Play: A Noncooperative View The RAND Journal of Economics, Vol. 20, No. 1. (Spring, 1989), pp.
113–124, this particular discussion is on page 118
[3] Christian Borgs, Jennifer Chayes, Nicole Immorlica, Adam Tauman Kalai, Vahab Mirrokni, and Christos Papadimitriou (2007). "The Myth
of the Folk Theorem" (http://research.microsoft.com/en-us/um/people/borgs/Papers/myth.pdf).

References
• Friedman, J. (1971), "A non-cooperative equilibrium for supergames", Review of Economic Studies 38 (1): 1–12,
doi:10.2307/2296617, JSTOR 2296617.
• Rubinstein, Ariel (1979), "Equilibrium in Supergames with the Overtaking Criterion", Journal of Economic
Theory 21: 1–9, doi:10.1016/0022-0531(79)90002-4
• Mas-Colell, A., Whinston, M. and Green, J. (1995) Microeconomic Theory, Oxford University Press, New York
(readable; suitable for advanced undergraduates.)
• Tirole, J. (1988) The Theory of Industrial Organization, MIT Press, Cambridge MA (An organized introduction
to industrial organization)
• Ratliff, J. (1996). A Folk Theorem Sampler (http://www.virtualperfection.com/gametheory/5.3.FolkTheoremSampler.1.0.pdf).
Great introductory notes to the Folk Theorem.

Revelation principle
The revelation principle of economics can be stated as, "To any Bayesian Nash equilibrium of a game of
incomplete information, there exists a payoff-equivalent revelation mechanism that has an equilibrium where the
players truthfully report their types."[1]
The revelation principle was introduced by Gibbard (1973) for dominant strategies rather than Bayesian equilibrium.
Later this principle was extended to the broader solution concept of Bayesian equilibrium by Dasgupta, Hammond
and Maskin (1979), Holmstrom (1977), and Myerson (1979).
The revelation principle is useful in game theory, mechanism design, social welfare and auctions. William Vickrey,
winner of the 1996 Nobel Prize for Economics, devised an auction type where the highest bidder would win the
sealed bid auction, but at the price offered by the second-highest bidder. Under this system, the highest bidder would
be better motivated to reveal his maximum price than in traditional auctions, which would also benefit the seller.
This is sometimes called a second price auction or a Vickrey auction.
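
The incentive Vickrey identified can be seen in a small simulation (a Lua sketch; the value 0.8, the shaded bid 0.6, and the uniform rival bid are illustrative assumptions):

    math.randomseed(7)
    local trials, value = 100000, 0.8
    local truthful, shaded = 0, 0
    for _ = 1, trials do
        local rival = math.random()                -- highest rival bid, uniform on [0, 1]
        if value > rival then                      -- truthful bid of 0.8 wins...
            truthful = truthful + (value - rival)  -- ...and pays the rival's (second) price
        end
        if 0.6 > rival then                        -- a shaded bid of 0.6 wins less often
            shaded = shaded + (value - rival)
        end
    end
    print("truthful:", truthful / trials, "shaded:", shaded / trials)  -- ~0.32 vs ~0.30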
In mechanism design the revelation principle is of utmost importance in finding solutions. The researcher need only
look at the set of equilibrium characterized by incentive compatibility. That is, if the mechanism designer wants to
implement some outcome or property, he can restrict his search to mechanisms in which agents are willing to reveal
their private information to the mechanism designer that has that outcome or property. If no such direct and truthful
mechanism exists, no mechanism can implement this outcome/property. By narrowing the area needed to be
searched, the problem of finding a mechanism becomes much easier.

In correlated equilibrium
The revelation principle says that for every arbitrary coordinating device, also known as a correlating device, there
exists another, direct device for which the state space equals the action space of each player. The coordination is
then done by directly informing each player of his action.

Notes
[1] Robert Gibbons, Game theory for applied economists, p. 165

References
• Dasgupta, P., Hammond, P. and Maskin, E. 1979. The implementation of social choice rules: some results on
incentive compatibility. Review of Economic Studies 46, 185–216.
• Gibbard, A. 1973. Manipulation of voting schemes: a general result. Econometrica 41, 587–601.
• Holmstrom, B. 1977. On incentives and control in organizations. Ph.D. thesis, Stanford University.
• Myerson, R. 1979. Incentive-compatibility and the bargaining problem. Econometrica 47, 61–73.
• Robert Gibbons, Game theory for applied economists (http://books.google.it/books?id=8ygxf2WunAIC&pg=PA164&dq=revelation+principle&hl=it&ei=om1uTaTVH5Sp8QPU9MiDDw&sa=X&oi=book_result&ct=book-thumbnail&resnum=1&ved=0CCsQ6wEwAA#v=onepage&q&f=false),
Princeton University Press, 1992

Arrow's impossibility theorem


In social choice theory, Arrow’s impossibility theorem, the General Possibility Theorem, or Arrow’s paradox,
states that, when voters have three or more distinct alternatives (options), no voting system can convert the ranked
preferences of individuals into a community-wide (complete and transitive) ranking while also meeting a certain set
of criteria. These criteria are called unrestricted domain, non-dictatorship, Pareto efficiency, and independence of
irrelevant alternatives. The theorem is often cited in discussions of election theory as it is further interpreted by the
Gibbard–Satterthwaite theorem.
The theorem is named after economist Kenneth Arrow, who demonstrated the theorem in his Ph.D. thesis and
popularized it in his 1951 book Social Choice and Individual Values. The original paper was titled "A Difficulty in
the Concept of Social Welfare".[1] Arrow was a co-recipient of the 1972 Nobel Memorial Prize in Economics.
In short, the theorem proves that no voting system can be designed that satisfies these three "fairness" criteria:
• If every voter prefers alternative X over alternative Y, then the group prefers X over Y.
• If every voter's preference between X and Y remains unchanged, then the group's preference between X and Y
will also remain unchanged (even if voters' preferences between other pairs like X and Z, Y and Z, or Z and W
change).
• There is no "dictator": no single voter possesses the power to always determine the group's preference.
There are several voting systems that side-step these requirements by using cardinal utility (which conveys more
information than rank orders) and weakening the notion of independence (see the subsection discussing the cardinal
utility approach to overcoming the negative conclusion). Arrow, like many economists, rejected cardinal utility as a
meaningful tool for expressing social welfare, and so focused his theorem on preference rankings.
The axiomatic approach Arrow adopted can treat all conceivable rules (that are based on preferences) within one
unified framework. In that sense, the approach is qualitatively different from the earlier one in voting theory, in
which rules were investigated one by one. One can therefore say that the contemporary paradigm of social choice
theory started from this theorem.[2]

Statement of the theorem


The need to aggregate preferences occurs in many different disciplines: in welfare economics, where one attempts to
find an economic outcome which would be acceptable and stable; in decision theory, where a person has to make a
rational choice based on several criteria; and most naturally in voting systems, which are mechanisms for extracting
a decision from a multitude of voters' preferences.
The framework for Arrow's theorem assumes that we need to extract a preference order on a given set of options
(outcomes). Each individual in the society (or equivalently, each decision criterion) gives a particular order of
preferences on the set of outcomes. We are searching for a preferential voting system, called a social welfare
function (preference aggregation rule), which transforms the set of preferences (profile of preferences) into a single
global societal preference order. The theorem considers the following properties, assumed to be reasonable
requirements of a fair voting method:
Non-dictatorship
The social welfare function should account for the wishes of multiple voters. It cannot simply mimic the
preferences of a single voter.
Unrestricted domain
(or universality) For any set of individual voter preferences, the social welfare function should yield a unique
and complete ranking of societal choices. Thus:
• It must do so in a manner that results in a complete ranking of preferences for society.
• It must deterministically provide the same ranking each time voters' preferences are presented the same way.
Independence of irrelevant alternatives (IIA)
The social preference between x and y should depend only on the individual preferences between x and y
(Pairwise Independence). More generally, changes in individuals' rankings of irrelevant alternatives (ones
outside a certain subset) should have no impact on the societal ranking of the subset. (See Remarks below.)
Positive association of social and individual values
(or monotonicity) If any individual modifies his or her preference order by promoting a certain option, then
the societal preference order should respond only by promoting that same option or not changing, never by
placing it lower than before. An individual should not be able to hurt an option by ranking it higher.
Non-imposition
(or citizen sovereignty) Every possible societal preference order should be achievable by some set of
individual preference orders. This means that the social welfare function is surjective: It has an unrestricted
target space.
Arrow's theorem says that if the decision-making body has at least two members and at least three options to decide
among, then it is impossible to design a social welfare function that satisfies all these conditions at once.
A later (1963) version of Arrow's theorem can be obtained by replacing the monotonicity and non-imposition criteria
with:
Pareto efficiency
(or unanimity) If every individual prefers a certain option to another, then so must the resulting societal
preference order. This, again, is a demand that the social welfare function will be minimally sensitive to the
preference profile.
The later version of this theorem is stronger (it has weaker conditions), since monotonicity, non-imposition, and
independence of irrelevant alternatives together imply Pareto efficiency, whereas Pareto efficiency, non-imposition,
and independence of irrelevant alternatives together do not imply monotonicity.
Remarks on IIA

1. The IIA condition can be justified for three reasons (Mas-Colell, Whinston, and Green, 1995, page 794): (i)
normative (irrelevant alternatives should not matter), (ii) practical (use of minimal information), and (iii) strategic
(providing the right incentives for the truthful revelation of individual preferences). Though the strategic property
is conceptually different from IIA, it is closely related.
2. Arrow's death-of-a-candidate example (1963, page 26) suggests that the agenda (the set of feasible alternatives)
shrinks from, say, X = {a, b, c} to S = {a, b} because of the death of candidate c. This example is misleading
since it can give the reader an impression that IIA is a condition involving two agenda and one profile. The fact is
that IIA involves just one agendum ({x, y} in case of Pairwise Independence) but two profiles. If the condition is
applied to this confusing example, it requires this: Suppose an aggregation rule satisfying IIA chooses b from the
agenda {a, b} when the profile is given by (cab, cba), that is, individual 1 prefers c to a to b, 2 prefers c to b to a.
Then, it must still choose b from {a, b} if the profile were, say, (abc, bac) or (acb, bca) or (acb, cba) or (abc, cba).

Formal statement of the theorem


Let A be a set of outcomes and N a number of voters or decision criteria. We shall denote the set of all full linear
orderings of A by L(A).
A (strict) social welfare function (preference aggregation rule) is a function F : L(A)^N → L(A) which
aggregates voters' preferences into a single preference order on A.[3] The N-tuple (R1, …, RN) of voters'
preferences is called a preference profile. In its strongest and most simple form, Arrow's impossibility theorem states
that whenever the set A of possible alternatives has more than 2 elements, then the following three conditions
become incompatible:
unanimity, or Pareto efficiency
If alternative a is ranked above b for all orderings R1, …, RN, then a is ranked higher than b by
F(R1, …, RN). (Note that unanimity implies non-imposition.)
non-dictatorship
There is no individual i whose preferences always prevail. That is, there is no i in {1, …, N} such that
F(R1, …, RN) = Ri for all (R1, …, RN).
independence of irrelevant alternatives
For two preference profiles (R1, …, RN) and (S1, …, SN) such that for all individuals i, alternatives a
and b have the same order in Ri as in Si, alternatives a and b have the same order in F(R1, …, RN)
as in F(S1, …, SN).
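
A concrete way to see the tension in these axioms is that familiar aggregation rules violate at least one of them. The Lua sketch below shows the Borda count failing independence of irrelevant alternatives: between the two profiles, no voter changes his relative order of a and b, yet the social order of a and b flips:

    local function borda(profile)              -- profile: list of rankings, best candidate first
        local score = { a = 0, b = 0, c = 0 }
        for _, ranking in ipairs(profile) do
            for pos, cand in ipairs(ranking) do
                score[cand] = score[cand] + (#ranking - pos)
            end
        end
        return score
    end

    local p1 = { {"a","b","c"}, {"a","b","c"}, {"a","b","c"}, {"b","a","c"}, {"b","a","c"} }
    -- p2 moves only c for the last two voters; every voter's a-vs-b order is unchanged
    local p2 = { {"a","b","c"}, {"a","b","c"}, {"a","b","c"}, {"b","c","a"}, {"b","c","a"} }
    for _, p in ipairs({p1, p2}) do
        local s = borda(p)
        print(string.format("a = %d, b = %d, c = %d -> society prefers %s",
            s.a, s.b, s.c, (s.a > s.b) and "a" or "b"))
    end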

Informal proof
Based on the proof by John Geanakoplos of Cowles Foundation, Yale University.[4]
We wish to prove that any social choice system respecting unrestricted domain, unanimity, and independence of
irrelevant alternatives (IIA) is a dictatorship.

Part one: there is a "pivotal" voter for B


Say there are three choices for society, call them A, B, and C. Suppose first that everyone prefers option B the least.
That is, everyone prefers every other option to B. By unanimity, society must prefer every option to B. Specifically,
society prefers A and C to B. Call this situation Profile 1.
On the other hand, if everyone preferred B to everything else, then society would have to prefer B to everything else
by unanimity. So it is clear that, if we take Profile 1 and, running through the members in the society in some
arbitrary but specific order, move B from the bottom of each person's preference list to the top, there must be some
point at which B moves off the bottom of society's preferences as well, since we know it eventually ends up at the
top. When that happens, we call that voter the pivotal voter (for B).


We now want to show that, at the point when the pivotal voter n moves B off the bottom of his preferences to the
top, the society's B moves to the top of its preferences as well, not to an intermediate point.
To prove this, consider what would happen if it were not true. Then, after n has moved B to the top (i.e., when voters
1 through n have B at the top and voters n + 1 through N still have B at the bottom) society would have some
option it prefers to B, say A, and one less preferable than B, say C.
Now if each person moves his preference for C above A, then society would prefer C to A by unanimity. But
moving C above A should not change anything about how B and C compare, by independence of irrelevant
alternatives. That is, since B is either at the very top or bottom of each person's preferences, moving C or A around
does not change how either compares with B, leaving B preferred to C. Similarly, by independence of irrelevant
alternatives society still prefers A to B because the changing of C and A does not affect how A and B compare.
Since C is above A, and A is above B, C must be above B in the social preference ranking. We have reached an
absurd conclusion.
Therefore, when voters 1 through n have moved B from the bottom of their preferences to the top, society
moves B from the bottom all the way to the top, not to some intermediate point.
Note that even with a different starting profile, say Profile 1′, if the order of moving preference of B is unchanged,
the pivotal voter remains n. That is, the pivotal voter is determined only by the moving order, and not by the starting
profile.
This can be seen as follows. If we concentrate on a pair consisting of B and one of the other choices, then during each
step of the process the preferences within the pair are unchanged, whether we start from Profile 1 or Profile 1′, for
every person. Therefore by IIA, the social preference within the pair should be unchanged. Since this applies to every
other choice, for Profile 1′ the position of B remains at the bottom before voter n and remains at the top after and
including voter n, just as in Profile 1.

Part two: voter n is a dictator for A–C


We show that voter n dictates society's decision between A and C. In other words, we show that n is a (local)
dictator over the set {A, C} in the following sense: if n prefers A to C, then the society prefers A to C and if n
prefers C to A, then the society prefers C to A.
Let p1 be any profile in which voter n prefers A to C. We show that society prefers A to C. To show that, construct
two profiles from p1 by changing the position of B as follows: In Profile 2, all voters up to (not including) n have B
at the top of their preferences and the rest (including n) have B at the bottom. In Profile 3, all voters up to (and
including) n have B at the top and the rest have B at the bottom.
Now consider the profile p4 obtained from p1 as follows: everyone up to n ranks B at the top, n ranks A above B
above C, and everyone else ranks B at the bottom. As far as the A–B decision is concerned, p4 is just as in Profile 2,
which we proved puts A above B (in Profile 2, B is actually at the bottom of the social ordering). C's new position is
irrelevant to the B–A ordering for society because of IIA. Likewise, p4 has a relationship between B and C that is
just as in Profile 3, which we proved has B above C (B is actually at the top). We can conclude from these two
observations that society puts A above B above C at p4. Since the relative rankings of A and C are the same across
p1 and p4, we conclude that society puts A above C at p1.
Similarly, we can show that if q1 is any profile in which voter n prefers C to A, then society prefers C to A. It
follows that person n is a (local) dictator over {A, C}.
Remark. Since B is irrelevant (IIA) to the decision between A and C, the fact that we assumed particular profiles
that put B in particular places does not matter. This was just a way of finding out, by example, who the dictator over
A and C was. But all we need to know is that he exists.
Part three: there can be at most one dictator


Finally, we show that the (local) dictator over {A, C} is a (global) dictator: he also dictates over {A, B} and over {B,
C}. We will use the fact (which can be proved easily) that if > is a strict linear order, then it contains no cycles
such as A > B > C > A. We have proved in Part two that there are (local) dictators i over {A, B}, j over {B,
C}, and k over {A, C}.
• If i, j, k are all distinct, consider any profile in which i prefers A to B, j prefers B to C and k prefers C to A. Then
the society prefers A to B to C to A, a contradiction.
• If one of i, j, k is different and the other two are equal, assume i=j without loss of generality. Consider any profile
in which i=j prefers A to B to C and k prefers C to A. Then the society prefers A to B to C to A, a contradiction.
It follows that i=j=k, establishing that the local dictator over {A, C} is a global one.

Interpretations of the theorem


Arrow's theorem is a mathematical result, but it is often expressed in a non-mathematical way with a statement such
as "No voting method is fair", "Every ranked voting method is flawed", or "The only voting method that isn't flawed
is a dictatorship". These statements are simplifications of Arrow's result which are not universally considered to be
true. What Arrow's theorem does state is that a deterministic preferential voting mechanism - that is, one where a
preference order is the only information in a vote, and any possible set of votes gives a unique result - cannot comply
with all of the conditions given above simultaneously.
Arrow did use the term "fair" to refer to his criteria. Indeed, Pareto efficiency, as well as the demand for
non-imposition, seems acceptable to most people.
Various theorists have suggested weakening the IIA criterion as a way out of the paradox. Proponents of ranked
voting methods contend that the IIA is an unreasonably strong criterion. It is the one breached in most useful voting
systems. Advocates of this position point out that failure of the standard IIA criterion is trivially implied by the
possibility of cyclic preferences. If voters cast ballots as follows:
• 1 vote for A > B > C
• 1 vote for B > C > A
• 1 vote for C > A > B
then the pairwise majority preference of the group is that A wins over B, B wins over C, and C wins over A: these
yield rock-paper-scissors preferences for any pairwise comparison. In this circumstance, any aggregation rule that
satisfies the very basic majoritarian requirement that a candidate who receives a majority of votes must win the
election, will fail the IIA criterion, if social preference is required to be transitive (or acyclic). To see this, suppose
that such a rule satisfies IIA. Since majority preferences are respected, the society prefers A to B (two votes for A>B
and one for B>A), B to C, and C to A. Thus a cycle is generated, which contradicts the assumption that social
preference is transitive.
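This cycle can be checked mechanically. The following Python sketch (illustrative only; the ballots are the three listed above) tallies the pairwise majority contests and confirms that the resulting social relation is cyclic:

    # The three ballots from the example, each listed from best to worst.
    ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

    def majority_prefers(x, y):
        """True if a strict majority of ballots rank x above y."""
        wins = sum(1 for b in ballots if b.index(x) < b.index(y))
        return wins * 2 > len(ballots)

    for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
        print(f"majority prefers {x} to {y}: {majority_prefers(x, y)}")
    # All three lines print True: A beats B, B beats C, and C beats A,
    # so the pairwise majority relation is cyclic and cannot be transitive.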
So, what Arrow's theorem really shows is that any majority-wins voting system is a non-trivial game, and that game
theory should be used to predict the outcome of most voting mechanisms.[5] This could be seen as a discouraging
result, because a game need not have efficient equilibria, e.g., a ballot could result in an alternative nobody really
wanted in the first place, yet everybody voted for.
Remark: Scalar rankings from a vector of attributes and the IIA property. The IIA property might not be satisfied in human decision-making of realistic complexity, because the scalar preference ranking is effectively derived from the weighting (not usually explicit) of a vector of attributes, and this scalar ranking can depend sensitively on the weighting of different attributes, with the tacit weighting itself affected by the context and contrast created by apparently "irrelevant" choices. (One book dealing with the Arrow theorem invites the reader to consider the related problem of creating a scalar measure for the track and field decathlon event: how, for example, does one make scoring 600 points in the discus event "commensurable" with scoring 600 points in the 1500 m race?) Edward MacNeal discusses this sensitivity problem with respect to the ranking of "most livable city" in the chapter "Surveys" of his book MathSemantics: making numbers talk sense (1994).

Other possibilities
In an attempt to escape from the negative conclusion of Arrow's theorem, social choice theorists have investigated
various possibilities ("ways out"). These investigations can be divided into the following two:
• those investigating functions whose domain, like that of Arrow's social welfare functions, consists of profiles of
preferences;
• those investigating other kinds of rules.

Approaches investigating functions of preference profiles


This section includes approaches that deal with
• aggregation rules (functions that map each preference profile into a social preference), and
• other functions, such as functions that map each preference profile into an alternative.
Since these two approaches often overlap, we discuss them at the same time. What is characteristic of these
approaches is that they investigate various possibilities by eliminating or weakening or replacing one or more
conditions (criteria) that Arrow imposed.

Infinitely many individuals


Several theorists (e.g., Kirman and Sondermann, 1972[6] ) point out that when one drops the assumption that there
are only finitely many individuals, one can find aggregation rules that satisfy all of Arrow's other conditions.
However, such aggregation rules are practically of limited interest, since they are based on ultrafilters, highly
nonconstructive mathematical objects. In particular, Kirman and Sondermann argue that there is an "invisible
dictator" behind such a rule. Mihara (1997,[7] 1999[8] ) shows that such a rule violates algorithmic computability.[9]
These results can be seen to establish the robustness of Arrow's theorem.[10]

Limiting the number of alternatives


When there are only two alternatives to choose from, May's theorem shows that only simple majority rule satisfies a
certain set of criteria (e.g., equal treatment of individuals and of alternatives; increased support for a winning
alternative should not make it into a losing one). On the other hand, when there are at least three alternatives,
Arrow's theorem points out the difficulty of collective decision making. Why is there such a sharp difference
between the case of less than three alternatives and that of at least three alternatives?
Nakamura's theorem (about the core of simple games) gives an answer more generally. It establishes that if the
number of alternatives is less than a certain integer called the Nakamura number, then the rule in question will
identify "best" alternatives without any problem; if the number of alternatives is greater or equal to the Nakamura
number, then the rule will not always work, since for some profile a voting paradox (a cycle such as alternative A
socially preferred to alternative B, B to C, and C to A) will arise. Since the Nakamura number of majority rule is 3
(except in the case of four individuals, where it is 4), one can conclude from Nakamura's theorem that majority rule can deal with up
to two alternatives rationally. Some super-majority rules (such as those requiring 2/3 of the votes) can have a
Nakamura number greater than 3, but such rules violate other conditions given by Arrow.[11]
Remark. A common way "around" Arrow's paradox is limiting the alternative set to two alternatives. Thus,
whenever more than two alternatives should be put to the test, it seems very tempting to use a mechanism that pairs
them and votes by pairs. As tempting as this mechanism seems at first glance, it is generally far from meeting even the Pareto principle, not to mention IIA. The specific order by which the pairs are decided strongly influences the outcome, which means that the person controlling the order by which the choices are paired (the agenda maker) has great control over the outcome. This is not necessarily a bad feature of the mechanism: many sports use the tournament mechanism, essentially a pairing mechanism, to choose a winner, and this gives considerable opportunity for weaker teams to win, adding interest and tension throughout the tournament. In any case, when viewing the entire voting process as one game, Arrow's theorem still applies.
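The agenda maker's power is easy to demonstrate with the cyclic profile from the earlier rock-paper-scissors example. In the following minimal Python sketch (illustrative only), pairwise majority voting makes each of the three alternatives the overall winner under a suitably chosen pairing order:

    ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]  # the cyclic profile

    def pairwise_winner(x, y):
        """Majority vote between two alternatives."""
        x_votes = sum(1 for b in ballots if b.index(x) < b.index(y))
        return x if x_votes * 2 > len(ballots) else y

    def tournament(agenda):
        """The survivor of each pairing meets the next alternative on the agenda."""
        winner = agenda[0]
        for challenger in agenda[1:]:
            winner = pairwise_winner(winner, challenger)
        return winner

    for agenda in [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]:
        print(agenda, "->", tournament(agenda))
    # ('A', 'B', 'C') -> C; ('B', 'C', 'A') -> A; ('C', 'A', 'B') -> B.
    # The agenda maker can produce any winner by choosing the pairing order.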

Domain restrictions
Another approach is relaxing the universality condition, which means restricting the domain of aggregation rules.
The best-known result along this line assumes "single peaked" preferences.
Duncan Black has shown that if there is only one dimension on which every individual has a "single-peaked"
preference, then all of Arrow's conditions are met by majority rule. Suppose that there is some predetermined linear
ordering of the alternative set. An individual's preference is single-peaked with respect to this ordering if he has
some special place that he likes best along that line, and his dislike for an alternative grows larger as the alternative
goes further away from that spot (i.e., the graph of his utility function has a single peak if alternatives are placed
according to the linear ordering on the horizontal axis). For example, if voters were voting on where to set the
volume for music, it would be reasonable to assume that each voter had their own ideal volume preference and that
as the volume got progressively too loud or too quiet they would be increasingly dissatisfied. If the domain is
restricted to profiles in which every individual has a single peaked preference with respect to the linear ordering,
then simple aggregation rules, which include majority rule, have an acyclic (defined below) social preference, hence "best" alternatives.[12] In particular, when there is an odd number of individuals, the social preference
becomes transitive, and the socially "best" alternative is equal to the median of all the peaks of the individuals
(Black's median voter theorem[13] ). Under single-peaked preferences, the majority rule is in some respects the most
natural voting mechanism.
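Black's observation can be illustrated with a short Python sketch (the five peak positions below are hypothetical). Modelling a single-peaked preference as "the closer to my peak, the better", the alternative at the median peak wins every pairwise majority contest:

    import statistics

    peaks = [2, 4, 5, 7, 9]   # hypothetical ideal points of five voters
    alternatives = range(11)  # positions 0..10 along the one-dimensional ordering

    def prefers(peak, x, y):
        """Single-peaked preference: the voter strictly prefers the option nearer his peak."""
        return abs(x - peak) < abs(y - peak)

    def majority_prefers(x, y):
        return sum(prefers(p, x, y) for p in peaks) * 2 > len(peaks)

    median = statistics.median(peaks)  # 5, the median of the voters' peaks
    assert all(majority_prefers(median, x) for x in alternatives if x != median)
    print(f"The median peak {median} beats every other alternative pairwise.")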
One can define the notion of "single-peaked" preferences on higher dimensional sets of alternatives. However, one
can identify the "median" of the peaks only in exceptional cases. Instead, we typically have the destructive situation
suggested by McKelvey's Chaos Theorem (1976[14]): for any alternatives x and y, one can find a sequence of alternatives z1, z2, …, zk such that x is beaten by z1 by a majority, z1 by z2, …, and finally zk by y.

Relaxing transitivity
By relaxing the transitivity of social preferences, we can find aggregation rules that satisfy Arrow's other conditions.
If we impose neutrality (equal treatment of alternatives) on such rules, however, there exists an individual who has a
"veto". So the possibility provided by this approach is also very limited.
First, suppose that a social preference is quasi-transitive (instead of transitive); this means that the strict preference
("better than") is transitive: if and , then . Then, there do exist non-dictatorial
aggregation rules satisfying Arrow's conditions, but such rules are oligarchic (Gibbard, 1969). This means that there
exists a coalition L such that L is decisive (if every member in L prefers x to y, then the society prefers x to y), and
each member in L has a veto (if she prefers x to y, then the society cannot prefer y to x).
Second, suppose that a social preference is acyclic (instead of transitive): there do not exist alternatives x1, x2, …, xk that form a cycle (x1 preferred to x2, x2 to x3, …, xk−1 to xk, and xk back to x1). Then, provided that there
are at least as many alternatives as individuals, an aggregation rule satisfying Arrow's other conditions is collegial
(Brown, 1975[15] ). This means that there are individuals who belong to the intersection ("collegium") of all decisive
coalitions. If there is someone who has a veto, then he belongs to the collegium. If the rule is assumed to be neutral,
then it does have someone who has a veto.
Finally, Brown's theorem left open the case of acyclic social preferences where the number of alternatives is less
than the number of individuals. One can give a definite answer for that case using the Nakamura number. See
#Limiting the number of alternatives.
Relaxing IIA
There are numerous examples of aggregation rules satisfying Arrow's conditions except IIA. The Borda rule is one
of them. These rules, however, are susceptible to strategic manipulation by individuals (Blair and Muller, 1983[16] ).
See also Interpretations of the theorem below.
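The Borda rule's IIA failure can be made concrete with a small Python sketch (the electorate below is hypothetical). Removing candidate C, whom no voter moved relative to A and B, reverses the social ranking of A and B:

    def borda_scores(ballots, candidates):
        """Standard Borda count restricted to the given candidate list."""
        scores = {c: 0 for c in candidates}
        for ballot in ballots:
            ranked = [c for c in ballot if c in candidates]
            for position, c in enumerate(ranked):
                scores[c] += len(ranked) - 1 - position  # top place earns most points
        return scores

    # Hypothetical electorate: 3 voters rank A > B > C, 2 voters rank B > C > A.
    ballots = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2

    print(borda_scores(ballots, ["A", "B", "C"]))  # {'A': 6, 'B': 7, 'C': 2}: B beats A
    print(borda_scores(ballots, ["A", "B"]))       # {'A': 3, 'B': 2}: A beats B
    # No voter changed how A and B compare, yet dropping C reverses the social
    # ranking of A and B, which is exactly a violation of IIA.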

Relaxing the Pareto criterion


Wilson (1972) shows that if an aggregation rule is non-imposed and non-null, then there is either a dictator or an
inverse dictator, provided that Arrow's conditions other than Pareto are also satisfied. Here, an inverse dictator is an
individual i such that whenever i prefers x to y, then the society prefers y to x.
Remark. Amartya Sen offered both relaxation of transitivity and removal of the Pareto principle.[17] He
demonstrated another interesting impossibility result, known as the "impossibility of the Paretian Liberal". (See
liberal paradox for details). Sen went on to argue that this demonstrates the futility of demanding Pareto optimality in
relation to voting mechanisms.

Social choice instead of social preference


In social decision making, to rank all alternatives is not usually a goal. It often suffices to find some alternative. The
approach focusing on choosing an alternative investigates either social choice functions (functions that map each
preference profile into an alternative) or social choice rules (functions that map each preference profile into a subset
of alternatives).
As for social choice functions, the well-known Gibbard–Satterthwaite theorem states that if a social choice function whose range contains at least three alternatives is strategy-proof, then it is dictatorial.
As for social choice rules, we should assume there is a social preference behind them. That is, we should regard a
rule as choosing the maximal elements ("best" alternatives) of some social preference. The set of maximal elements
of a social preference is called the core. Conditions for existence of an alternative in the core have been investigated
in two approaches. The first approach assumes that preferences are at least acyclic (which is necessary and sufficient
for the preferences to have a maximal element on any finite subset). For this reason, it is closely related to #Relaxing
transitivity. The second approach drops the assumption of acyclic preferences. Kumabe and Mihara (2011[18] ) adopt
this approach. They make a more direct assumption that individual preferences have maximal elements, and examine
conditions for the social preference to have a maximal element. See Nakamura number for details of these two
approaches.

Rated voting systems and other approaches


Arrow's framework assumes that individual and social preferences are "orderings" (i.e., satisfy completeness and
transitivity) on the set of alternatives. This means that if the preferences are represented by a utility function, its
value is an ordinal utility in the sense that it is meaningful so far as the greater value indicates the better alternative.
For instance, having ordinal utilities of 4, 3, 2, 1 for alternatives a, b, c, d, respectively, is the same as having 1000,
100.01, 100, 0, which in turn is the same as having 99, 98, 1, .997. They all represent the ordering in which a is
preferred to b to c to d. The assumption of ordinal preferences, which precludes interpersonal comparisons of utility,
is an integral part of Arrow's theorem.
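The equivalence of these utility vectors can be checked directly: sorting the alternatives by utility yields the same ordering in each case, and that ordering is all the information an Arrovian aggregation rule may use. A tiny Python illustration:

    # The three utility assignments from the text.
    profiles = [
        {"a": 4, "b": 3, "c": 2, "d": 1},
        {"a": 1000, "b": 100.01, "c": 100, "d": 0},
        {"a": 99, "b": 98, "c": 1, "d": 0.997},
    ]

    orderings = [sorted(u, key=u.get, reverse=True) for u in profiles]
    assert all(o == ["a", "b", "c", "d"] for o in orderings)
    # All three induce the same ordinal ranking: a preferred to b to c to d.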
For various reasons, an approach based on cardinal utility, where the utility has a meaning beyond just giving a
ranking of alternatives, is not common in contemporary economics. However, once one adopts that approach, one
can take intensities of preferences into consideration, or one can compare (i) gains and losses of utility or (ii) levels
of utility, across different individuals. In particular, Harsanyi (1955) gives a justification of utilitarianism (which
evaluates alternatives in terms of the sum of individual utilities), originating from Jeremy Bentham. Hammond
(1976) gives a justification of the maximin principle (which evaluates alternatives in terms of the utility of the
worst-off individual), originating from John Rawls.
Not all voting methods use, as input, only an ordering of all candidates.[19] Methods which don't, often called "rated"
or "cardinal" (as opposed to "ranked", "ordinal", or "preferential") voting systems, can be viewed as using
information that only cardinal utility can convey. In that case, it is not surprising if some of them satisfy suitably reformulated versions of all of Arrow's conditions.[20] Warren Smith claims that range voting is such a method.[21] [22]
Whether such a claim is correct depends on how each condition is reformulated.[23] Other rated voting systems
which pass certain generalizations of Arrow's criteria include Approval voting and Majority Judgment. Note that
although Arrow's theorem does not apply to such methods, the Gibbard–Satterthwaite theorem still does: no system
is fully strategy-free, so the informal dictum that "no voting system is perfect" still has a mathematical basis.
Finally, though not an approach investigating some kind of rules, there is a criticism by James M. Buchanan and
others. It argues that it is silly to think that there might be social preferences that are analogous to individual
preferences. Arrow (1963, Chapter 8) answers this sort of criticism, voiced in the early period, which comes at least partly from misunderstanding.

Notes
[1] Arrow, K.J., "A Difficulty in the Concept of Social Welfare" (http://gatton.uky.edu/Faculty/hoytw/751/articles/arrow.pdf), Journal of Political Economy 58(4) (August, 1950), pp. 328–346.
[2] Suzumura, Kōtarō; Arrow, Kenneth Joseph; Sen, Amartya Kumar (2002). Handbook of Social Choice and Welfare, vol. 1. Amsterdam, Netherlands: Elsevier. ISBN 978-0-444-82914-6. Introduction, page 10.
[3] Note that by definition, a social welfare function as defined here satisfies the Unrestricted domain condition. Restricting the range to the
social preferences that are never indifferent between distinct outcomes is probably a very restrictive assumption, but the goal here is to give a
simple statement of the theorem. Even if the restriction is relaxed, the impossibility result will persist.
[4] Three Brief Proofs of Arrow’s Impossibility Theorem (http://ideas.repec.org/p/cwl/cwldpp/1123r3.html)
[5] This does not mean various normative criteria will be satisfied if we use equilibrium concepts in game theory. Indeed, the mapping from profiles to equilibrium outcomes defines a social choice rule, whose performance can be investigated by social choice theory. See Austen-Smith, David; Banks, Jeffrey S. (1999). Positive Political Theory I: Collective Preference (http://books.google.com/books?id=nxXDn3nPxIAC&q="nakamura+number"). Ann Arbor: University of Michigan Press. ISBN 978-0-472-08721-1, Section 7.2.
[6] Kirman, A. P.; Sondermann, D. (1972). "Arrow's theorem, many agents, and invisible dictators". Journal of Economic Theory. doi:10.1016/0022-0531(72)90106-8.
[7] Mihara, H. R. (1997). "Arrow's Theorem and Turing computability" (http://econpapers.repec.org/paper/wpawuwppe/9408001.htm). Economic Theory 10 (2): 257–276. doi:10.1007/s001990050157.
[8] Mihara, H. R. (1999). "Arrow's theorem, countably many agents, and more visible invisible dictators" (http://econpapers.repec.org/paper/wpawuwppe/9705001.htm). Journal of Mathematical Economics 32: 267–277. doi:10.1016/S0304-4068(98)00061-5.
[9] Mihara's definition of a computable aggregation rule is based on computability of a simple game (see Rice's theorem).
[10] See Taylor, Alan D. (2005). Social Choice and the Mathematics of Manipulation. New York: Cambridge University Press. ISBN 0-521-00883-2, Chapter 6, for a concise discussion of social choice for infinite societies.
[11] Austen-Smith and Banks (1999, Chapter 3) gives a detailed discussion of the approach trying to limit the number of alternatives.
[12] Indeed, many different social welfare functions can meet Arrow's conditions under such restrictions of the domain. It has been proved,
however, that under any such restriction, if there exists any social welfare function that adheres to Arrow's criteria, then the majority rule will
adhere to Arrow's criteria. See Campbell, D.E.; Kelly, J.S., "A simple characterization of majority rule", Economic Theory 15 (2000), pp. 689–700.
[13] Black, Duncan (1968). The theory of committees and elections. Cambridge, Eng.: University Press. ISBN 0-89838-189-4.
[14] McKelvey, R. (1976). "Intransitivities in multidimensional voting models and some implications for agenda control". Journal of Economic
Theory 12 (3): 472–482. doi:10.1016/0022-0531(76)90040-5.
[15] Brown, D. J. (1975). "Aggregation of Preferences". Quarterly Journal of Economics 89: 456–469.
[16] Blair, D.; Muller, E. (1983). "Essential aggregation procedures on restricted domains of preferences". Journal of Economic Theory 30: 34–53. doi:10.1016/0022-0531(83)90092-3.
[17] Sen, Amartya. 1979. "Personal Utilities and Public Judgements: Or What's Wrong With Welfare Economics?" The Economic Journal, 89, 537–558.
[18] Kumabe, M.; Mihara, H. R. (2011). Games and Economic Behavior. doi:10.1016/j.geb.2010.06.008.
[19] It is sometimes asserted that such methods may trivially fail the universality criterion. However, it is more appropriate to consider that such
methods fail Arrow's definition of an aggregation rule (or that of a function whose domain consists of preference
orderings cannot uniquely translate into a ballot.
[20] However, a modified version of Arrow's theorem may still apply to such methods (e.g., Brams and Fishburn, 2002, Chapter 4, Theorem 4.2).
[21] Warren D. Smith, et al. "How can range voting accomplish the impossible?" (http://rangevoting.org/ArrowThm.html).
[22] New Scientist, 12 April 2008, pp. 30–33.
[23] No voting method that nontrivially uses cardinal utility satisfies Arrow's IIA (in which preference profiles are replaced by lists of ballots or
lists of utilities). For this reason, a weakened notion of IIA is proposed (e.g., Sen, Amartya Kumar (1979). Collective Choice and Social Welfare. Amsterdam: North-Holland. ISBN 978-0-444-85127-7, page 129). The notion requires that the social ranking of two
alternatives depend only on the levels of utility attained by individuals at the two alternatives. (More formally, a social welfare functional F is
a function that maps each list of utility functions into a social preference. F satisfies IIA (for social welfare
functionals) if for all lists u, u' and for all alternatives x, y, if ui(x) = ui'(x) and ui(y) = ui'(y) for all i, then xF(u)y iff xF(u')y.)
Many cardinal voting methods (including Range voting) satisfy the weakened version of IIA.

References
• Campbell, D.E. and Kelly, J.S. (2002) Impossibility theorems in the Arrovian framework, in Handbook of social
choice and welfare (ed. by Kenneth J. Arrow, Amartya K. Sen and Kotaro Suzumura), volume 1, pages 35–94,
Elsevier. Surveys many of the approaches discussed in #Approaches investigating functions of preference profiles.
• The Mathematics of Behavior by Earl Hunt, Cambridge University Press, 2007. The chapter "Defining
Rationality: Personal and Group Decision Making" has a detailed discussion of the Arrow Theorem, with proof.
URL to CUP information on this book (http://www.cambridge.org/9780521850124)
• Why flip a coin? : the art and science of good decisions by Harold W. Lewis, John Wiley, 1997. Gives explicit
examples of preference rankings and apparently anomalous results under different voting systems. States but does
not prove Arrow's theorem. ISBN 0-471-29645-7
• Sen, A. K. (1979) “Personal utilities and public judgements: or what's wrong with welfare economics?” The
Economic Journal, 89, 537-558, arguing that Arrow's theorem was wrong because it did not incorporate
non-utility information and the utility information it did allow was impoverished. http://www.jstor.org/stable/2231867

External links
• Three Brief Proofs of Arrow’s Impossibility Theorem (http://ideas.repec.org/p/cwl/cwldpp/1123r3.html)
• A Pedagogical Proof of Arrow’s Impossibility Theorem (http://repositories.cdlib.org/ucsdecon/1999-25/)
• Another Graphical Proof of Arrow’s Impossibility Theorem (http://www.jstor.org/stable/1183438)
Additional Reading

Tragedy of the commons


The tragedy of the commons is a dilemma arising from the situation in which multiple individuals, acting independently and rationally consulting their own self-interest, will ultimately deplete a shared limited resource, even when it is clear that it is not in anyone's long-term interest for this to happen. This dilemma was first described in an influential article titled "The Tragedy of the Commons", written by ecologist Garrett Hardin and first published in the journal Science in 1968.[1]

[Image: Cows on Selsley Common. The "tragedy of the commons" is one way of accounting for overexploitation.]
Hardin's Commons Theory is frequently cited to support the notion of sustainable development, meshing economic growth and environmental protection, and has had an effect on numerous current issues, including the debate over global warming. An asserted impending "tragedy of the commons" is frequently warned of as a consequence for adopting policies which restrict private property and espouse expansion of public property.[2] [3]

Central to Hardin's article is an example (first sketched in an 1833 pamphlet by William Forster Lloyd) involving
medieval land tenure in Europe, of herders sharing a common parcel of land, on which they are each entitled to let
their cows graze. In Hardin's example, it is in each herder's interest to put the next (and succeeding) cows he acquires
onto the land, even if the quality of the common is damaged for all as a result, through overgrazing. The herder
receives all of the benefits from an additional cow, while the damage to the common is shared by the entire group. If
all herders make this individually rational economic decision, the common will be depleted or even destroyed, to the
detriment of all. Hardin also cites modern examples, including the overfishing of the world's oceans and ranchers
who graze their cattle on government lands in the American West.[1]
A similar dilemma of the commons had been discussed by agrarian reformers since the 18th century.[4] Hardin's
predecessors used the alleged tragedy, as well as a variety of examples from the Greek Classics, to justify the
enclosure movement. German historian Joachim Radkau sees Garrett Hardin's writings as having a different aim in
that Hardin asks for a strict management of common goods via increased government involvement and/or
international regulation bodies.[4]
Hardin's work has been criticised on the grounds of historical inaccuracy, and for failing to distinguish between
common property and open access resources. While Hardin recommended that the tragedy of the commons could be
prevented by either more government regulation or privatizing the commons property, subsequent Nobel
Prize-winning work by Elinor Ostrom suggests that handing control of local areas to national and international
regulators can create further problems.[5] [6] Ostrom argues that the tragedy of the commons may not be as prevalent
or as difficult to solve as Hardin implies, since locals have often come up with solutions to the commons problem
themselves; when the commons is taken over by non-locals, those solutions can no longer be used.[5] Ostrom
recognizes that there are real problems, and even limited situations where the tragedy of the commons applies to
real-world resource management.[7]
As Hardin acknowledged,[8] there was a fundamental mistake in the use of the term "commons."
noted in 1975 by Ciriacy-Wantrup & Bishop (1975: 714)[9] who wrote that we "are not free to use the concept
'common property resources' or 'commons' under conditions where no institutional arrangements exist. Common
property is not 'everybody's property' (...). To describe unowned resource (res nullius) as common property (res
communis), as many economists have done for years (...) is a self-contradiction." Neglecting the difference between
common property and open access resources is a major source of confusion in the debate that followed Hardin's 1968 article.

History of the idea

References to the Greek classics


Thucydides (ca. 460 BC-ca. 395 BC) stated: "[T]hey devote a very small fraction of time to the consideration of any
public object, most of it to the prosecution of their own objects. Meanwhile each fancies that no harm will come to
his neglect, that it is the business of somebody else to look after this or that for him; and so, by the same notion being
entertained by all separately, the common cause imperceptibly decays."[10]
Aristotle (384-322 BC) similarly argued against common goods of the polis of Athens: "That all persons call the
same thing mine in the sense in which each does so may be a fine thing, but it is impracticable; or if the words are
taken in the other sense, such a unity in no way conduces to harmony. And there is another objection to the proposal.
For that which is common to the greatest number has the least care bestowed upon it. Every one thinks chiefly of his
own, hardly at all of the common interest; and only when he is himself concerned as an individual. For besides other
considerations, everybody is more inclined to neglect the duty which he expects another to fulfill; as in families
many attendants are often less useful than a few."[11]

Later commentary
In the 16th century School of Salamanca, Luis de Molina observed that individual owners take better care of their
goods than they do of common property.
More recently, William Forster Lloyd noted the comparison with medieval village land holding in his 1833 book on
population.[12]
Such a notion is not merely an abstraction, but its consequences have manifested literally in such common grounds
as the Boston Common, where overgrazing led to discontinuation of the common's use as public grazing ground.[13]
Radkau gives further, more positive examples and alleges the "real tragedy of the commons" to be ruthless use of
common land motivated by agrarian reforms.[4]

Garrett Hardin's essay

Summary
At the beginning of his essay, Hardin draws attention to problems that cannot be solved by technical means, as
distinct from those with solutions that require "a change only in the techniques of the natural sciences, demanding
little or nothing in the way of change in human values or ideas of morality". Hardin contends that this class of
problems includes many of those raised by human population growth and the use of the Earth's natural resources.
The problem of population growth, Hardin asserts, is endemic to society's inextricable ties to the welfare state.[14] Hardin says that in a world in which individuals relied on themselves and not on society, how many children a family had would not be a public concern. Parents who bred excessively would leave fewer
descendants because they would be unable to provide for each child adequately. Such negative feedback is found in
the animal kingdom.[14] Hardin says that if the children of improvident parents starved to death, if overbreeding were its own punishment, then there would be no public interest in controlling the breeding of families.[14] For Hardin, it is the welfare state that allows the tragedy of the commons: where the state provides for children and supports overbreeding as a fundamental human right, Malthusian catastrophe is inevitable. Hardin laments this interpretation
of The Universal Declaration of Human Rights:
The Universal Declaration of Human Rights describes the family as the natural and fundamental unit of
society. [Article 16[15] ] It follows that any choice and decision with regard to the size of the family must
irrevocably rest with the family itself, and cannot be made by anyone else.
—U Thant, Statement on Population by UN Secretary-General[16]
This parental reproductive freedom was endorsed by the 1968 UN Proclamation of Tehran. Hardin advocates
repudiation of this element of the Proclamation.[14]
To make the case for "no technical solutions," Hardin notes the limits placed on the availability of energy (and
material resources) on Earth, and also the consequences of these limits for "quality of life." To maximize population,
one needs to minimize resources spent on anything other than simple survival, and vice versa. Consequently, he
concludes that there is no foreseeable technical solution to increasing both human populations and their standard of
living on a finite planet.
From this point, Hardin switches to non-technical or resource management solutions to population and resource
problems. As a means of illustrating these, he introduces a hypothetical example of a pasture shared by local herders,
which he calls a commons. Assuming that the herders' only wish is yield maximization, they will increase their herd
size whenever possible. The marginal utility of each additional animal has both a positive and negative component:
• Positive: the herder receives all of the proceeds from each additional animal.
• Negative: the pasture is slightly degraded by each additional animal.
Crucially, division of these costs and benefits is unequal: the individual herder gains all of the advantage, but the
disadvantage is shared among all herders using the pasture. Assuming that the negative impact on the herder's other
animals is less than the income of a new one, the rational course of action for each individual herder will always be
herd expansion. Since all herders reach the same conclusion, overgrazing is inevitable. Each herder will continue to
impose costs on all of the others, until the pasture is depleted.
Because this sequence of events follows predictably from the behavior of the participants, Hardin describes it as a
"tragedy."
In the course of his essay, Hardin develops the theme, drawing in examples of latter day "commons," such as the
atmosphere, oceans, rivers, fish stocks, national parks, advertising, and even parking meters. The example of fish
stocks had led some to call this the "tragedy of the fishers."[17] Throughout the essay the impact of human population
growth is a concern, with the Earth's resources being a general common.
The essay addresses potential management solutions to commons problems including privatization, polluter pays,
and regulation. Keeping with his original pasture analogy, Hardin categorizes these as effectively the "enclosure" of
commons, and notes a historical progression from the use of all resources as commons (unregulated access to all) to
systems in which commons are "enclosed" and subject to methods of regulated use in which access is prohibited or
controlled. Hardin argues against relying on conscience as a means of policing commons, suggesting that this favors
selfish individuals — often known as free riders — over those who are more altruistic.
In the context of avoiding over-exploitation of common resources, Hardin concludes by restating Hegel's maxim
(which was quoted by Engels), "freedom is the recognition of necessity." He suggests that "freedom" completes the
tragedy of the commons. By recognizing resources as commons in the first place, and by recognizing that, as such,
they require management, Hardin believes that humans "can preserve and nurture other and more precious
freedoms."
Aside from its subject matter (resource use), the essay is notable (at least in modern scientific circles) for explicitly
dealing with issues of morality, and doing so in one of the scientific community's premier journals, Science. Indeed,
the subtitle for the essay is "The population problem has no technical solution; it requires a fundamental extension in
morality."

Meaning
The metaphor illustrates the argument that free access and unrestricted demand for a finite resource ultimately
reduces the resource through over-exploitation, temporarily or permanently. This occurs because the benefits of
exploitation accrue to individuals or groups, each of whom is motivated to maximize use of the resource to the point
at which they become reliant on it, while the costs of the exploitation are borne by all those to whom the resource is
available (which may be a wider class of individuals than those who are exploiting it). This, in turn, causes demand
for the resource to increase, which causes the problem to snowball to the point that the resource is depleted (even if it
retains a capacity to recover). The rate at which depletion of the resource is realized depends primarily on three
factors: the number of users wanting to consume the common in question, the consumptiveness of their uses, and the
relative robustness of the common.[18]
Like William Lloyd and Thomas Malthus before him, Hardin was primarily interested in the problem of human
population growth. In his essay he also focused on the use of larger (though still limited) resources such as the
Earth's atmosphere and oceans, as well as pointing out the "negative commons" of pollution (i.e., instead of dealing
with the deliberate privatization of a positive resource, a "negative commons" deals with the deliberate
commonization of a negative cost, pollution).
As a metaphor, the tragedy of the commons should not be taken too literally. The phrase is shorthand for a structural
relationship and the consequences of that relationship, not a precise description of it. The "tragedy" should not be
seen as tragic in the conventional sense, nor must it be taken as condemnation of the processes that are ascribed to it.
Similarly, Hardin's use of "commons" has frequently been misunderstood, leading Hardin to later remark that he
should have titled his work "The Tragedy of the Unregulated Commons".[19]
The tragedy of the commons has particular relevance in analyzing behavior in the fields of economics, evolutionary
psychology, game theory, politics, taxation, and sociology. Some also see it as an example of emergent behavior,
with the "tragedy" the outcome of individual interactions in a complex system.

Criticism
Hardin's essay has been widely criticized. Public policy experts have argued that Hardin's account of the breakdown
of common grazing land was inaccurate, and that such commons were effectively managed to prevent
overgrazing.[20] Referring to Hardin's crucial passage on page 1244,[17] Partha Dasgupta, for example, comments that
"it is difficult to find a passage of comparable length and fame that contains so many errors as the one quoted."[21]
More significantly, criticism has been fueled by the application of Hardin's ideas to current policy issues. In
particular, some authorities have read Hardin's work as specifically advocating the privatization of commonly owned
resources. Consequently, resources that have traditionally been managed communally by local organizations have
been enclosed or privatized. Ostensibly, this serves to protect such resources, but it ignores the pre-existing
management, often appropriating resources and alienating indigenous (and frequently poor) populations. In effect,
private or state use may result in worse outcomes than the previous management of commons.[22]
Some of this controversy stems from disagreement over whether individuals will always behave in the selfish
fashion posited by Hardin. Others[23] have argued that even self-interested individuals will often find ways to
cooperate, because collective restraint serves both the collective and individual interests. Hardin's piece has also been
criticised as promoting the interests of Western economic ideology. G. N. Appell, an anthropologist, states: "Hardin's
claim has been embraced as a sacred text by scholars and professionals in the practice of designing futures for others
and imposing their own economic and environmental rationality on other social systems of which they have
incomplete understanding and knowledge."[24]


Hardin's advocacy of clearly defined property rights has frequently been used as an argument for privatization, or
private property, per se. The opposite situation to a tragedy of the commons is sometimes referred to as a tragedy of
the anticommons: a situation in which rational individuals (acting separately) collectively waste a given resource by
under-utilizing it.

Application

Modern commons
The tragedy of the commons can be considered in relation to environmental issues such as sustainability. The
commons dilemma stands as a model for a great variety of resource problems in society today, such as water, land,
fish, and non-renewable energy sources such as oil and coal.
Situations exemplifying the "tragedy of the commons" include the overfishing and destruction of the Grand Banks,
the destruction of salmon runs on rivers that have been dammed – most prominently in modern times on the
Columbia River in the Northwest United States, and historically in North Atlantic rivers – the devastation of the
sturgeon fishery – in modern Russia, but historically in the United States as well – and, in terms of water supply, the
limited water available in arid regions (e.g., the area of the Aral Sea) and the Los Angeles water system supply,
especially at Mono Lake and Owens Lake.
Other situations exemplifying the "tragedy of the commons" include pollution caused by driving cars. There are
many negative externalities of driving; these include congestion, carbon emissions, and traffic accidents. For
example, every time 'Person A' gets in a car, it becomes more likely that 'Person Z' – and millions of others – will
suffer in each of those areas.[25]
More general examples (some alluded to by Hardin) of potential and actual tragedies include:
• Planet Earth ecology
• Uncontrolled human population growth leading to
overpopulation.[1]
• Air, whether ambient air polluted by industrial emissions and
cars among other sources of air pollution, or indoor air.
• Water – Water pollution, Water crisis of over-extraction of
groundwater and wasting water due to overirrigation[26]
• Forests – Frontier logging of old growth forest and slash and burn[27] [Image: Clearing rainforest for agriculture in southern Mexico.]
• Energy resources and climate – Burning of fossil fuels and consequential global warming
• Animals – Habitat destruction and poaching leading to the Holocene mass extinction[28]
• Oceans – Overfishing[29] [30]
• Publicly shared resources
• Radio frequencies – Unlicensed frequencies used for wireless communications, especially 802.11 a/b/g in the
U.S., detailed under Part 15 (FCC rules) would be vulnerable to the overuse of high power transmitters,
especially overdriven transmitters with dirty signal profiles, and especially when combined with
omnidirectional antennas, had the FCC not mandated maximum transmission power for each class of device
and limitations on their spectral profile.
• Spam email degrades the usefulness of the email system and increases the cost for all users of the Internet
while providing a benefit to only a tiny number of individuals.
• Vandalism and littering in public spaces such as Public restrooms, parks and recreation areas.
• Knowledge commons encompass immaterial and collectively owned goods in the information age.
• Freeways experience heavy traffic due to overuse

Modern solutions
Articulating solutions to the tragedy of the commons is one of the main problems of political philosophy. In absence
of enlightened self-interest, some form of authority or federation is needed to solve the collective action problem. In
a typical example, governmental regulations can limit the amount of a common good available for use by any
individual. Permit systems for extractive economic activities including mining, fishing, hunting, livestock raising and
timber extraction are examples of this approach. Similarly, limits to pollution are examples of governmental
intervention on behalf of the commons. Alternatively, resource users themselves can cooperate to conserve the
resource in the name of mutual benefit.
Another solution for certain resources is to convert common good into private property, giving the new owner an
incentive to enforce its sustainability. Effectively, this is what took place in the English Inclosure Acts. Increasingly,
many agrarian studies scholars advocate studying traditional commons management systems to understand how
common resources can be protected without alienating those whose livelihoods depend upon them.
An opposing idea, used by the United Nations Moon Treaty, Outer Space Treaty and Law of the Sea Treaty as well
as the UNESCO World Heritage Convention involves the international law principle that designates certain areas or
resources the Common Heritage of Mankind.[31]
Libertarians and classical liberals often cite the tragedy of the commons as an example of what happens when
Lockean property rights to homestead resources are prohibited by a government.[32] [33] [34] These people argue that
the solution to the tragedy of the commons is to allow individuals to take over the property rights of a resource, that
is, privatizing it.[35] In 1940 Ludwig von Mises wrote concerning the problem:
If land is not owned by anybody, although legal formalism may call it public property, it is used without
any regard to the disadvantages resulting. Those who are in a position to appropriate to themselves the
returns — lumber and game of the forests, fish of the water areas, and mineral deposits of the subsoil —
do not bother about the later effects of their mode of exploitation. For them, erosion of the soil,
depletion of the exhaustible resources and other impairments of the future utilization are external costs
not entering into their calculation of input and output. They cut down trees without any regard for fresh
shoots or reforestation. In hunting and fishing, they do not shrink from methods preventing the
repopulation of the hunting and fishing grounds.[36]
An objection to the privatization approach is that many commons (such as the ozone layer or global fish populations)
would be extremely difficult or impossible to privatize.
Psychologist Dennis Fox used a number, now termed "Dunbar's number", to take a new look at the tragedy of the commons. In a 1985 paper titled "Psychology, Ideology, Utopia, & the Commons" [37], he stated: "Edney (1980, 1981a) also argued that long-term solutions will require, among a number of other approaches, breaking down commons into smaller segments." Edney reviewed experimental data showing that cooperative behavior is indeed more common in smaller groups, estimating that "the upper limit for a simple, self-contained, sustaining, well-functioning commons [sic] may be as low as 150 people" (1981a, p. 27).
Costa Rica has successfully advanced the growth of its ecotourism business by taking account of, and pricing for, the
environmental business services consumed by pollution.[38] The Coast Salish managed their natural resources in a
place-based system in which families were responsible for looking after a place and its resources.[39] Access to food
was the major source of wealth and the empowerment of generosity was highly valued, so it made sense for them to
take care of the resources.
The "Coasian" solution to the problem is also a popular one, whereby the people formerly using the common each
gain their own individual part of it instead — so it is no longer a common — and do not have to support one another
so as not to deplete the resource.


In Hardin's essay, he proposed that the solution to the problem of overpopulation must be based on "mutual coercion,
mutually agreed upon" and result in "relinquishing the freedom to breed". Hardin discussed this topic further in a
1979 book, Managing the Commons, co-written with John A. Baden.[40] He framed this prescription in terms of
needing to restrict the "reproductive right" in order to safeguard all other rights. Only one large country has adopted
this policy, the People's Republic of China. In the essay, Hardin had rejected education as an effective means of
stemming population growth. Since that time, it has been shown that increased educational and economic
opportunities for women correlates well with reduced birthrates in most countries, as does economic growth in
general. However, given the nature of the problem as a limit to a given common resource, economic growth resulting
in a higher per capita use of the resource may more than offset the decreased population growth's effect on total
resource consumption. Note, however, that this now becomes a problem of economic expectations of a given
population, and the problem of birth regulation appears to be eliminated.

Application to evolutionary biology


The tragedy of the commons is referred to in studies of evolutionary biology, social evolution, sociobiology and
behavioral ecology. A tragedy of the commons is brought about by selfish individuals whose genes for selfish
behaviour would therefore come to predominate, so the metaphor cannot explain how altruism arises. This question
is addressed instead by models of possible mechanisms that can give rise to "reciprocal altruism," leading to ideas
like the "tit for tat" rule (reciprocation). These models freed evolutionary theory from the limitations imposed by the
concept of "inclusive fitness," a previous explanation for altruism, which proposed that organisms help others only to
the extent that by doing so they increase the probability of passing shared genes to the next generation.
A parallel was drawn recently between the tragedy of the commons and the competing behaviour of parasites that
through acting selfishly eventually diminish or destroy their common host.[41]
The idea has also been applied to areas such as the evolution of virulence or sexual conflict, where males may fatally
harm females when competing for matings.[42] It is also raised as a question in studies of social insects, where
scientists wish to understand why insect workers do not undermine the "common good" by laying eggs of their own
and causing a breakdown of the society.
The idea of evolutionary suicide, where adaptation at the level of the individual causes the whole species or
population to be driven extinct, can be seen as an extreme form of an evolutionary tragedy of the commons.[43] [44]

The commons dilemma


The commons dilemma is a specific class of social dilemma in which people's short-term selfish interests are at odds
with long-term group interests and the common good. In academia, a range of related terminology has also been used
as shorthand for the theory or aspects of it, including resource dilemma, take-some dilemma, and common pool
resource.
Commons dilemma researchers have studied conditions under which groups and communities are likely to under- or
over-harvest common resources in both the laboratory and field. Research programs have concentrated on a number
of motivational, strategic, and structural factors that might be conducive to management of commons.
In game theory, which constructs mathematical models for individuals' behavior in strategic situations, the
corresponding "game", developed by the ecologist Garrett Hardin, is known as the Commonize Costs — Privatize
Profits Game (CC–PP game).
Strategic factors
Strategic factors also matter in commons dilemmas. One often-studied strategic factor is the order in which people
take harvests from the resource. In simultaneous play, all people harvest at the same time, whereas in sequential play
people harvest from the pool according to a predetermined sequence — first, second, third, etc. There is a clear order
effect in the latter games: the harvests of those who come first — the leaders — are higher than the harvest of those
coming later — the followers. The interpretation of this effect is that the first players feel entitled to take more. With
sequential play, individuals adopt a first-come, first-served rule, whereas with simultaneous play people may adopt an
equality rule. Another strategic factor is the ability to build up reputations. Research found that people take less from
the common pool in public situations than in anonymous private situations. Moreover, those who harvest less gain
greater prestige and influence within their group.

Structural factors
Much research has focused on when and why people would like to structurally rearrange the commons to prevent a
tragedy. Hardin stated in his analysis of the tragedy of the commons that "Freedom in a commons brings ruin to
all."[45] One of the proposed solutions is to appoint a leader to regulate access to the common. Groups are more
likely to endorse a leader when a common resource is being depleted and when managing a common resource is
perceived as a difficult task. Groups prefer leaders who are elected, democratic, and prototypical of the group, and
these leader types are more successful in enforcing cooperation. There is a general aversion to autocratic leadership,
although it may be an effective solution, possibly because of the fear of power abuse and corruption.
The provision of rewards and punishments may also be effective in preserving common resources. Selective
punishments for overuse can be effective in promoting domestic water and energy conservation — for example,
through installing water and electricity meters in houses. Selective rewards work, provided that they are open to
everyone. An experimental carpool lane in the Netherlands failed because car commuters did not feel they were able
to organize a carpool. The rewards do not have to be tangible. In Canada there is a movement to put "smiley faces"
on electricity bills if you are below the average for your class.[46] Much field research on commons dilemmas has
combined solutions obtained in experimental research. Elinor Ostrom, who was awarded the 2009 Nobel Prize in
Economics for her work on the issue, and her colleagues looked at how real-world communities manage communal
resources, such as fisheries, land irrigation systems, and farmlands, and they identified a number of factors
conducive to successful resource management. One factor is the resource itself; resources with definable boundaries
(e.g., land) can be preserved much more easily. A second factor is resource dependence; there must be a perceptible
threat of resource depletion, and it must be difficult to find substitutes. The third is the presence of a community;
small and stable populations with a thick social network and social norms promoting conservation do better.[47] A
final condition is that there be appropriate community-based rules and procedures in place with built-in incentives
for responsible use and punishments for overuse.

Notes
[1] "The Tragedy of the Commons". Science 162 (3859): 1243–1248. 1968. doi:10.1126/science.162.3859.1243. PMID 5699198. Also available
here (http:/ / www. sciencemag. org/ cgi/ reprint/ 162/ 3859/ 1243. pdf) and here. (http:/ / www. garretthardinsociety. org/ articles/
art_tragedy_of_the_commons. html)
[2] "Socialism and the Tragedy of the Commons: Reflections on Environmental Practice in the Soviet Union and Russia" (http:/ / jed. sagepub.
com/ content/ 4/ 1/ 77. abstract). The Journal of Environment Development. January 1995 vol. 4 no. 1 77-110.
[3] Perry, Mark (June 1995 • Volume: 45 • Issue: 6). "Why Socialism Failed" (http:/ / www. thefreemanonline. org/ featured/
why-socialism-failed/ ). The Freeman.
[4] Radkau, Joachim. Nature and Power. A Global History of the Environment. Cambridge University Press 2008
[5] Tierney, John (2009-10-15). "The Non-Tragedy of the Commons" (http:/ / tierneylab. blogs. nytimes. com/ 2009/ 10/ 15/
the-non-tragedy-of-the-commons/ ).
[6] Riksbank Prize 2009 summary (http:/ / nobelprize. org/ nobel_prizes/ economics/ laureates/ 2009/ ecoadv09. pdf)
[7] "Ostrom 'revisits the commons' in 'Science'" (http:/ / www. iuinfo. indiana. edu/ HomePages/ 041699/ text/ ostrom. htm). .
[8] Ecologist, The, 1992. Whose Common Future?. Special Issue. The Ecologist. 22(4), 121-210
[9] Ciriacy-Wantrup S.V., Bishop R.C., 1975. "Common Property" as a Concept in Natural Resources Policy. Nat. Res. J. 15, 713-727.
[10] Thucydides (ca. 460 B.C.-ca. 395 B.C.), History of the Peloponnesian War, Book I, Sec. 141; translated by Richard Crawley (London: J. M. Dent & Sons; New York: E. P. Dutton & Co., 1910).
[11] Aristotle (384 B.C.-322 B.C.), Politics, Book II, Chapter III, 1261b; translated by Benjamin Jowett as The Politics of Aristotle: Translated into English with Introduction, Marginal Analysis, Essays, Notes and Indices (http://oll.libertyfund.org/Texts/Aristotle0039/Politics/HTMLs/0033-01_Pt02_Books1-4.html) (Oxford: Clarendon Press, 1885), Vol. 1 of 2. See also here (http://oll.libertyfund.org/ToC/0033.php), here (http://classics.mit.edu/Aristotle/politics.html), here (http://etext.library.adelaide.edu.au/a/aristotle/a8po/book2.html) or here (http://socserv2.mcmaster.ca/~econ/ugcm/3ll3/aristotle/Politics.pdf).
[12] William Forster Lloyd, Two Lectures on the Checks to Population (Oxford, England: Oxford University Press, 1833).
[13] Loewen, James. Lies Across America: What Our Historic Sites Get Wrong. New York: The New Press, 1999. p. 414. ISBN 0965003172.
[14] Hardin, G. (1968-12-13). "The Tragedy of the Commons" (section: "Freedom To Breed Is Intolerable") (http://www.sciencemag.org/content/162/3859/1243.full.pdf). Science (AAAS) 162 (3859): 1243–1248. doi:10.1126/science.162.3859.1243. PMID 5699198. Retrieved 2011-09-04. "it is the role of education to reveal to all the necessity of abandoning the freedom to breed. Only so, can we put an end to this aspect of the tragedy of the commons."
[15] "The Universal Declaration of Human Rights" (http://www.un.org/en/documents/udhr/). 10 December 1948. Retrieved 4 September 2011.
[16] United Nations. Dept. of Economic and Social Affairs. Population Division (2004). Levels and trends of contraceptive use as assessed in 2002. United Nations Publications. p. 126. ISBN 92-1-151399-5. "some have argued that it may be inferred from the rights to privacy, conscience, health and well-being set forth in various United Nation's conventions [...] Parents have a basic human right to determine freely and responsibly the number and spacing of their children (United Nations, 1968)"
[17] Samuel Bowles: Microeconomics: Behavior, Institutions, and Evolution, Princeton University Press, pp. 27–29 (2004). ISBN 0-691-09163-3.
[18] Brigham Daniels, "Emerging Commons Tragic Institutions", Environmental Law, Vol. 37 (2007), pp. 515-571 at 536 (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1227745).
[19] "Will commons sense dawn again in time?", The Japan Times Online (http://search.japantimes.co.jp/cgi-bin/fe20060726sh.html).
[20] "SpringerLink — Journal Article" (http://www.springerlink.com/content/wm68g57188j282u4/).
[21] Dasgupta, Partha. "Human Well-Being and the Natural Environment" (http://www.econ.cam.ac.uk/faculty/dasgupta/).
[22] Ostrom, Elinor, Joanna Burger, Christopher B. Field, Richard B. Norgaard, and David Policansky (1999): "Revisiting the Commons: Local Lessons, Global Challenges", Science, Vol. 284, 9 April, pp. 278-282.
[23] Axelrod, Robert (1984). The Evolution of Cooperation. New York: Basic Books. ISBN 0-465-02121-2.
[24] Appell, G. N. (1993). Hardin's Myth of the Commons: The Tragedy of Conceptual Confusions (http://dlc.dlib.indiana.edu/dlc/handle/10535/4532). Working Paper 8. Phillips, ME: Social Transformation and Adaptation Research Institute.
[25] Stephen J. Dubner and Steven D. Levitt, "Not-So-Free Ride" (http://www.nytimes.com/2008/04/20/magazine/20wwln-freakonomics-t.html?_r=1&pagewanted=1&oref=slogin), The New York Times.
[26] I.A. Shiklomanov, "Appraisal and Assessment of World Water Resources", Water International 25(1): 11-32 (2000).
[27] Wilson, E.O., 2002, The Future of Life, Vintage. ISBN 0-679-76811-4.
[28] Leakey, Richard and Roger Lewin, 1996, The Sixth Extinction: Patterns of Life and the Future of Humankind, Anchor. ISBN 0-385-46809-1.
[29] C. Michael Hogan. 2010. "Overfishing". Encyclopedia of Earth. National Council for Science and the Environment (http://www.eoearth.org/article/Overfishing). eds. Sidney Draggan and C. Cleveland. Washington DC.
[30] Mark Kurlansky, 1997. Cod: A Biography of the Fish That Changed the World, ch. 11-12. New York: Walker. ISBN 0-8027-1326-2.
[31] Jennifer Frakes, "The Common Heritage of Mankind Principle and the Deep Seabed, Outer Space, and Antarctica: Will Developed and Developing Nations Reach a Compromise?", Wisconsin International Law Journal. 2003; 21:409.
[32] Robert J. Smith, "Resolving the Tragedy of the Commons by Creating Private Property Rights in Wildlife" (http://www.cato.org/pubs/journal/cj1n2-1.html), Cato Journal, Vol. 1, No. 2 (Fall 1981), pp. 439-468.
[33] Murray N. Rothbard, "Law, Property Rights, and Air Pollution" (http://www.cato.org/pubs/journal/cj2n1/cj2n1-2.pdf), Cato Journal, Vol. 2, No. 1 (Spring 1982), pp. 55-100. Also available here (http://www.mises.org/rothbard/lawproperty.pdf).
[34] "Free-Market Environmentalism Reading List" (http://commonsblog.org/free_reading.php), The Commons Blog.
[35] John Locke, "Sect. 27" and following sections in Second Treatise of Government (http://oregonstate.edu/instruct/phl302/texts/locke/locke2/locke2nd-a.html#Sect.27.) (1690). Also available here (http://www.gutenberg.org/etext/7370).
[36] Ludwig von Mises, Part IV, Chapter 10, Sec. VI, Nationalökonomie: Theorie des Handelns und Wirtschaftens (http://www.mises.org/humanaction/pdf/nationaloekonomie.pdf) (Geneva: Editions Union, 1940). The quote provided is that of Mises's expanded English translation, Chapter XXIII: "The Data of the Market", Sec. 6: "The Limits of Property Rights and the Problems of External Costs and External Economies" (http://www.mises.org/humanaction/chap23sec6.asp), Human Action: A Treatise on Economics (New Haven: Yale University Press, 1949). Also available here (http://www.mises.org/humanaction/pdf/HumanActionScholars.pdf).
[37] http://www.dennisfox.net/papers/commons.html
[38] Thomas L. Friedman, "(No) Drill, Baby, Drill", New York Times op-ed column, April 11, 2009. http://www.nytimes.com/2009/04/12/opinion/12friedman.html?em
[39] http://www.coastalrevelations.com/images/news/Traditional_Ecological_Knowledge.pdf
[40] Managing the Commons by Garrett Hardin and John Baden (http://www.ecobooks.com/books/commons.htm)
[41] "The tragedy of the commons, the public goods dilemma, and the meaning of rivalry and excludability in evolutionary biology" (http://eao.igc.gulbenkian.pt/ENS/dionisio_evol_econ_rivalry_excludability.pdf), Francisco Dionisio and Isabel Gordo, Evolutionary Ecology Research, 2006.
[42] "Sex, death and tragedy" (http://www.socialgenes.org/publications/Pub_TREE1.pdf), Daniel J. Rankin and Hanna Kokko, Laboratory of Ecological and Evolutionary Dynamics, May 2006.
[43] "Can adaptation lead to extinction?" (http://www.socialgenes.org/publications/Pub_Oikos1.pdf), Rankin, D.J. & López-Sepulcre, A. (2005), Oikos 111: 616-619.
[44] "The tragedy of the commons in evolutionary biology" (http://www.socialgenes.org/publications/Pub_TREE2.pdf), Rankin, D.J., Bargum, K. & Kokko, H. (2007), Trends in Ecology and Evolution 22: 643-65.
[45] Hardin, 1244.
[46] (http://unambig.com/a-smiley-face-emoticon-for-your-electric-bill/)
[47] Elinor Ostrom: Beyond the tragedy of commons (http://www.youtube.com/watch?v=ByXM47Ri1Kc). Stockholm whiteboard seminars. (Video, 8:26 min.)

References
• Angus, I. (2008). The myth of the tragedy of the commons (http://www.socialistvoice.ca/?p=316), Socialist
Voice (August 24).
• Foddy, M., Smithson, M., Schneider, S., & Hogg, M. (1999). Resolving social dilemmas. Philadelphia, PA:
Psychology Press.
• Messick, D. M., Wilke, H. A. M., Brewer, M. B., Kramer, R. M., Zemke, P. E., & Lui, L. (1983). Individual
adaptations and structural change as solutions to social dilemmas. Journal of Personality and Social Psychology,
44, 294-309.
• Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective action. Cambridge:
Cambridge University Press.
• Van Vugt, M. (2001). Community identification moderating the impact of financial incentives in a natural social
dilemma. Personality and Social Psychology Bulletin, 27, 1440-1449.
• Van Vugt, M. (2009). Triumph of the commons: Helping the world to share (http://www.professormarkvanvugt.
com/index.php/publications/27-applications/101-triumph-of-the-commons), New Scientist (2722), 40-43.
• Van Vugt, M., Van Lange, P. A. M., Meertens, R. M. and Joireman, J. A. (1996). Why structural solutions to
social dilemmas might fail: A field experiment on the first carpool priority lane in Europe. Social Psychology
Quarterly, 59, 364-374.

Further reading
• Hardin, G. (1994). "The Tragedy of the Unmanaged Commons". Trends in Ecology & Evolution 9 (5): 199.
doi:10.1016/0169-5347(94)90097-3.
• Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective action. Cambridge:
Cambridge University Press.
• Thinking Strategically: The Competitive Edge in Business, Politics, and Everyday Life (http://www.amazon.
com/dp/0393310353) by Avinash Dixit and Barry Nalebuff
External links
• Original article by Garrett Hardin (http://www.sciencemag.org/cgi/content/full/162/3859/1243) from
Science (journal)
• The Digital Library of the Commons (http://dlc.dlib.indiana.edu/)
• The International Association for the Study of the Commons (IASC) (http://www.iascp.org/)
• The Myth of the Tragedy of the Commons (http://mrzine.monthlyreview.org/2008/angus250808.html) by Ian
Angus
• Global Tragedy of the Commons (http://www.greens.org/s-r/24/24-26.html) by John Hickman and Sarah
Bartlett
• Tragedy of the Commons Explained with Smurfs (http://www.scq.ubc.ca/
tragedy-of-the-commons-explained-with-smurfs/) by Ryan Somma

Tyranny of small decisions


The tyranny of small decisions refers to a phenomenon explored in an essay by that name, published in 1966 by the
American economist Alfred E. Kahn.[1] The article describes a situation where a number of decisions, individually
small in size and time perspective, cumulatively result in an outcome which is not optimal or desired. It is a situation
where a series of small, individually rational decisions can negatively change the context of subsequent choices,
even to the point where desired alternatives are irreversibly destroyed. Kahn described the problem as a common
issue in market economics which can lead to market failure.[1] The concept has since been extended to areas other
than economic ones, such as environmental degradation,[2] political elections[3] and health outcomes.[4]
A classic example of the tyranny of small decisions is the tragedy of the commons, described by Garrett Hardin in
1968[5] as a situation where a number of herders graze cows on a commons. The herders each act independently in
what they perceive to be their own rational self-interest, ultimately depleting their shared limited resource, even
though it is clear that it is not in any herder's long-term interest for this to happen.[6]
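
Hardin's herder arithmetic can be made explicit with a toy calculation. In the sketch below (Python; herd sizes, pasture capacity, and cost figures are invented for illustration), adding a cow is always profitable for the individual herder, because the overgrazing cost is shared among all herders, yet if every herder reasons this way all of them end up worse off.

# A toy version of Hardin's herder arithmetic (numbers are illustrative).
# Each cow earns its owner +1, but every cow beyond the pasture's capacity
# imposes an overgrazing cost that is shared equally by all herders.
N_HERDERS = 10
CAPACITY = 10             # cows the commons supports without degradation
COST_PER_EXTRA_COW = 3.0  # total damage caused by each cow over capacity

def herder_payoff(own_cows, total_cows):
    overgrazing = max(0, total_cows - CAPACITY) * COST_PER_EXTRA_COW
    return own_cows * 1.0 - overgrazing / N_HERDERS

print(herder_payoff(1, 10))  # 1.0: commons at capacity, one cow per herder
print(herder_payoff(2, 11))  # 1.7: adding a cow pays off for the individual...
print(herder_payoff(2, 20))  # -1.0: ...but ruins everyone if all ten do it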

The Ithaca railroad


The event that first suggested the tyranny of small
decisions to Kahn was the withdrawal of passenger
railway services in Ithaca, New York. The railway was
the only reliable way to get in and out of Ithaca. It
provided services regardless of conditions, in fair weather
and foul, during peak seasons and off-peak seasons. The
local airline and bus company skimmed the traffic when
conditions were favourable, leaving the trains to fill in
when conditions were difficult. The railway service was
eventually withdrawn, because the collective individual
decisions made by travellers did not provide the railway
with the revenue it needed to cover its incremental costs.
According to Kahn, this suggests a hypothetical economic
test of whether the service should have been withdrawn.

[Image: Abutment of the Ithaca-Auburn Short Line bridge]


Suppose each person in the cities served were to ask himself how much he would have been willing to
pledge regularly over some time period, say annually, by purchase of prepaid tickets, to keep rail
passenger service available to his community. As long as the amount that he would have declared (to
himself) would have exceeded what he actually paid on the period–and my own introspective
experiment shows that it would–then to that extent the disappearance of the passenger service was an
incident of market failure.[7]
The failure to reflect the full value to passengers of keeping the railroad service available had its origins in the
discrepancy between the time perception within which the travellers were operating, and the time perception within
which the railroad was operating. The travellers were making many short term decisions, deciding each particular
trip whether to go by the railroad, or whether to go instead by car, bus or the local airline. Based on the cumulative
effects of these small decisions, the railroad was making one major long run decision, "virtually all-or-nothing and
once-and-for-all"; whether to retain or abandon its passenger service. Taken one at a time, each small travel decision
made individually by the travellers had a negligible impact on the survivability of the railroad. It would not have
been rational for a traveller to consider the survival of the railroad imperilled by any one of his particular
decisions.[7]
The fact remains that each selection of x over y constitutes also a vote for eliminating the possibility
thereafter of choosing y. If enough people vote for x, each time necessarily on the assumption that y will
continue to be available, y may in fact disappear. And its disappearance may constitute a genuine
deprivation, which customers might willingly have paid something to avoid. The only choice the market
offered travellers to influence the longer-run decision of the railroad was thus shorter in its time
perspective, and the sum total of our individual purchases of railroad tickets necessarily added up to a
smaller amount, than our actual combined interest in the continued availability of rail service. We were
victims of the "tyranny of small decisions".[7]

Earlier references to the idea


Thucydides (ca. 460 BC-ca. 395 BC) stated:
[T]hey devote a very small fraction of time to the consideration of any public object, most of it to the
prosecution of their own objects. Meanwhile each fancies that no harm will come to his neglect, that it is
the business of somebody else to look after this or that for him; and so, by the same notion being
entertained by all separately, the common cause imperceptibly decays.[8]
Aristotle (384-322 BC) similarly argued against common goods of the polis of Athens:
For that which is common to the greatest number has the least care bestowed upon it. Every one thinks
chiefly of his own, hardly at all of the common interest; and only when he is himself concerned as an
individual. For besides other considerations, everybody is more inclined to neglect the duty which he
expects another to fulfill; as in families many attendants are often less useful than a few.[9]
Environmental degradation
In 1982, the estuarine ecologist William Odum published a paper in which he extended the notion of the tyranny of
small decisions to environmental issues. According to Odum, "much of the current confusion and distress
surrounding environmental issues can be traced to decisions that were never consciously made, but simply resulted
from a series of small decisions."[2]

Odum cites, as an example, the marshlands along the coasts of Connecticut and Massachusetts. Between 1950 and
1970, almost 50 percent of these marshlands were destroyed. This was not purposely planned, and the public may
well have supported preservation had they been asked. Instead, hundreds of small tracts of marshland were
converted to other purposes through hundreds of small decisions, resulting in a major outcome without the overall
issue ever being directly addressed.[2]

[Image: As a result of many small decisions, and without the issue being directly addressed, nearly half the marshlands were destroyed along the coasts of Connecticut and Massachusetts]

Another example is the Florida Everglades. These have been threatened not by a single unfavorable decision, but by
many independent pinprick decisions, such as decisions to add this well, that drainage canal, one more retirement
village, another roadway... No explicit decision was made to restrict the flow of surface water into the glades, or to
encourage hot, destructive fires and intensify droughts, yet this has been the outcome.[2]

With few exceptions, threatened and endangered species owe their predicament to a series of small decisions. Polar
bears, humpback whales and bald eagles have suffered from the cumulative effects of single decisions to overexploit
or convert habitats. The removal, one by one, of green turtle nesting beaches for other uses parallels the decline in
green turtle populations.[2]

Cultural lake eutrophication is rarely the result of an intentional decision. Instead, lakes eutrophy gradually as a
cumulative effect of small decisions: the addition of this domestic sewage outfall and then that industrial outfall,
with runoff that increases steadily as this housing development is added, then that highway and some more
agricultural fields.[2] The insidious effects of small decisions march on; productive land turns to desert,
groundwater resources are overexploited to the point where they cannot recover, persistent pesticides are used and
tropical forests are cleared without factoring in the cumulative consequences.[2]
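
The arithmetic of such gradual loss is easy to check. In the rough sketch below, the 0.5% loss per decision is an invented figure, not one drawn from Odum; it simply shows how quickly individually negligible conversions accumulate.

# How many "negligible" conversions of 0.5% each does it take to destroy
# half of a resource? (The per-decision figure is purely illustrative.)
remaining, decisions = 1.0, 0
while remaining > 0.5:
    remaining *= 1 - 0.005  # one more small tract converted
    decisions += 1
print(decisions, round(remaining, 3))  # 139 decisions later, half is gone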

Counters
Considering all of the pressures and short-term rewards that guide society toward simple solutions, it seems safe to assume that the "tyranny of small decisions" will be an integral part of environmental policy for a long time to come. – William Odum[2]

An obvious counter to the tyranny of small decisions is to develop and protect appropriate upper levels of decision
making. Depending on the issue, decision making may be appropriate at a local, state, country or global level.
However, organisations at these levels can entangle themselves in their own bureaucracy and politics, assigning
decisions by default back to the lower levels. Political and scientific systems can encourage small decisions by
rewarding specific problems and solutions. It is usually easier and more politic to make decisions on individual tracts
of land or single issues rather than implementing large scale policies. The same pattern applies with academic
science. Most scientists are more comfortable working on specific problems rather than systems. This reductionist
tendency towards the small problems is reinforced in the way grant monies and academic tenure are assigned.[2]
Odum advocates that at least some scientists should study systems so the negative consequences that result when
many small decisions are made from a limited perspective can be avoided. There is a similar need for politicians and
planners to understand large scale perspectives. Environmental science teachers should include large scale processes
in their courses, with examples of the problems that decision making at inappropriate levels can introduce.[2]

Notes
[1] Kahn, Alfred E. (1966) "The tyranny of small decisions: market failures, imperfections, and the limits of economics" (http://www3.interscience.wiley.com/journal/119726548/abstract), Kyklos, 19:23-47.
[2] Odum WE (1982) "Environmental degradation and the tyranny of small decisions" (http://links.jstor.org/sici?sici=0006-3568(198210)32:9<728:EDATTO>2.0.CO;2-6), BioScience, 32(9):728-729.
[3] Burnell, P (2002) "Zambia's 2001 Elections: the Tyranny of Small Decisions, Non-decisions and 'Not Decisions'" (http://www.jstor.org/pss/3993565), Third World Quarterly, 23(3): 1103-1120.
[4] Bickel WK and Marsch LA (2000) "The Tyranny of Small Decisions: Origins, Outcomes, and Proposed Solutions" (http://books.google.co.nz/books?id=uaVhbKrh2FkC&pg=PA341), Chapter 13 in Bickel WK and Vuchinich RE (2000) Reframing health behavior change with behavioral economics, Routledge. ISBN 9780805827330.
[5] Garrett Hardin, "The Tragedy of the Commons" (http://www.sciencemag.org/cgi/content/full/162/3859/1243), Science, Vol. 162, No. 3859 (December 13, 1968), pp. 1243-1248. Also available here (http://www.sciencemag.org/cgi/reprint/162/3859/1243.pdf) and here (http://www.garretthardinsociety.org/articles/art_tragedy_of_the_commons.html).
[6] Baylis J, Wirtz JJ, Cohen EA and Gray CS (2007) Strategy in the contemporary world: an introduction to strategic studies (http://books.google.co.nz/books?id=1akfkqhO_m0C&pg=PT368), p. 368. Oxford University Press. ISBN 9780199289783.
[7] Kahn AE (1988) The economics of regulation: principles and institutions (http://books.google.co.nz/books?id=x01ew7Emw0MC&pg=RA1-PA237), Volume 1, pp. 237–238. MIT Press. ISBN 9780262610520.
[8] Thucydides (ca. 460 B.C.-ca. 395 B.C.), History of the Peloponnesian War, Book I, Sec. 141; translated by Richard Crawley (London: J. M. Dent & Sons; New York: E. P. Dutton & Co., 1910).
[9] Aristotle (384 B.C.-322 B.C.), Politics, Book II, Chapter III, 1261b; translated by Benjamin Jowett as The Politics of Aristotle: Translated into English with Introduction, Marginal Analysis, Essays, Notes and Indices (http://oll.libertyfund.org/Texts/Aristotle0039/Politics/HTMLs/0033-01_Pt02_Books1-4.html) (Oxford: Clarendon Press, 1885), Vol. 1 of 2. See also here (http://oll.libertyfund.org/ToC/0033.php), here (http://classics.mit.edu/Aristotle/politics.html), here (http://etext.library.adelaide.edu.au/a/aristotle/a8po/book2.html) or here (http://socserv2.mcmaster.ca/~econ/ugcm/3ll3/aristotle/Politics.pdf).

References
• Haraldsson HV, Sverdrup HU, Belyazid S, Holmqvist J and Gramstad RCJ (2008) "The Tyranny of Small Steps:
a reoccurring behaviour in management" (http://findarticles.com/p/articles/mi_7349/is_1_25/ai_n32054024/),
Systems Research and Behavioral Science, Jan-Feb.
All-pay auction
In economics and game theory, an all-pay auction is an auction in which all bidders must pay regardless of whether
they win the prize, which is awarded to the highest bidder as in a conventional auction. The all-pay auction is often
used to model lobbying (bids are political contributions) or other competitions, such as contests between animals.

The most straightforward form of an all-pay auction is a Tullock auction, sometimes called a Tullock lottery, in
which everyone submits a bid but both the losers and the winners pay their submitted bids. This is instrumental in
describing certain ideas in public choice economics. The dollar auction is a two-player Tullock auction, or a
multi-player game in which only the two highest bidders pay their bids.

A conventional lottery or raffle can also be seen as a related process, since all ticket-holders have paid but only one
gets the prize.

Other forms of all-pay auctions exist, such as the war of attrition, in which the highest bidder wins, but all (or both,
more typically) bidders pay only the lower bid. The war of attrition is used by biologists to model conventional
contests, or agonistic interactions resolved without recourse to physical aggression.

In an all-pay auction the Nash equilibrium is such that each bidder plays a mixed strategy and their expected payoff
is zero. The seller's expected revenue is equal to the value of the prize. However, some experiments have shown that
over-bidding is common. That is, the seller's revenue frequently exceeds the value of the prize, and in
repeated games even bidders that win the prize frequently will most likely make a loss in the long run.[1]
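
This equilibrium claim can be checked numerically in the simplest case. The sketch below simulates the textbook two-bidder, complete-information all-pay auction, in which the mixed-strategy equilibrium is for each bidder to draw a bid uniformly between zero and the value of the prize; the prize value and trial count are arbitrary choices.

# Simulating the two-bidder, complete-information all-pay auction.
# In equilibrium each bidder draws a bid uniformly on [0, V] (a standard
# textbook result, not a claim from this article's sources).
import random

V = 1.0          # value of the prize
TRIALS = 100_000

revenue = payoff_1 = 0.0
for _ in range(TRIALS):
    b1, b2 = random.uniform(0, V), random.uniform(0, V)
    revenue += b1 + b2                        # everyone pays their bid
    payoff_1 += (V if b1 > b2 else 0.0) - b1  # the winner also gets the prize

print(round(revenue / TRIALS, 3))   # ~1.0: seller's expected revenue = prize value
print(round(payoff_1 / TRIALS, 3))  # ~0.0: each bidder's expected payoff is zero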
Commonplace practical examples of all-pay auctions can be found on several bidding fee auction websites.

References
[1] Gneezy and Smorodinsky (2006), "All-pay auctions - an experimental study", Journal of Economic Behavior & Organization, Vol. 61, pp. 255-275.

External links
• Econ Talk podcast where economic professors discuss grants as an all-pay or Tullock auction. (http://www.
econtalk.org/archives/2006/06/giving_away_mon.html)
• What do we know about penny auctions? - Toomas Hinnosaar (http://toomas.hinnosaar.net/penny_slides.pdf)
• Penny Auctions - Toomas Hinnosaar (http://toomas.hinnosaar.net/pennyauctions.pdf)
List of games in game theory


Game theory studies strategic interaction between individuals in situations called games. Classes of these games
have been given names. This is a list of the most commonly studied games.

Explanation of features
Games can have several features, a few of the most common are listed here.
• Number of players: Each person who makes a choice in a game or who receives a payoff from the outcome of
those choices is a player.
• Strategies per player: In a game each player chooses from a set of possible actions, known as strategies. If the
number is the same for all players, it is listed here.
• Number of pure strategy Nash equilibria: A Nash equilibrium is a set of strategies which represents mutual
best responses to the other strategies. In other words, if every player is playing their part of a Nash equilibrium,
no player has an incentive to unilaterally change his or her strategy. Considering only situations where players
play a single strategy without randomizing (a pure strategy), a game can have any number of Nash equilibria; a
short sketch for counting them in a two-player game follows this list.
• Sequential game: A game is sequential if one player performs her/his actions after another, otherwise the game is
a simultaneous move game.
• Perfect information: A game has perfect information if it is a sequential game and every player knows the
strategies chosen by the players who preceded them.
• Constant sum: A game is constant sum if the sum of the payoffs to every player are the same for every set of
strategies. In these games one player gains if and only if another player loses. A constant sum game can be
converted into a zero sum game by subtracting a fixed value from all payoffs, leaving their relative order
unchanged.
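
For two-player games given as payoff matrices, the pure-strategy Nash equilibria counted in the table below can be enumerated directly from the best-response definition above. A minimal sketch in Python follows; the stag hunt payoffs are illustrative numbers, chosen so the result matches the table's entry of two pure equilibria.

# Count the pure-strategy Nash equilibria of a two-player game given as
# payoff matrices: a cell is an equilibrium if neither player can gain by
# deviating unilaterally.
def pure_nash(payoff_row, payoff_col):
    n_rows, n_cols = len(payoff_row), len(payoff_row[0])
    equilibria = []
    for i in range(n_rows):
        for j in range(n_cols):
            row_ok = all(payoff_row[i][j] >= payoff_row[k][j] for k in range(n_rows))
            col_ok = all(payoff_col[i][j] >= payoff_col[i][k] for k in range(n_cols))
            if row_ok and col_ok:
                equilibria.append((i, j))
    return equilibria

# Stag hunt (strategies: 0 = stag, 1 = hare); payoff values are illustrative.
row_payoffs = [[3, 0], [2, 2]]
col_payoffs = [[3, 2], [0, 2]]
print(pure_nash(row_payoffs, col_payoffs))  # [(0, 0), (1, 1)] -> two pure equilibria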

List of games
Game | Players | Strategies per player | Number of pure strategy Nash equilibria | Sequential | Perfect information | Zero sum
Battle of the sexes | 2 | 2 | 2 | No | No | No
Blotto games | 2 | variable | variable | No | No | Yes
Cake cutting | N, usually 2 | infinite[1] | variable | Yes | Yes | Yes
Centipede game | 2 | variable | 1 | Yes | Yes | No
Chicken (aka hawk-dove) | 2 | 2 | 2 | No | No | No
Coordination game | N | variable | >2 | No | No | No
Cournot game | 2 | infinite[2] | 1 | No | No | No
Deadlock | 2 | 2 | 1 | No | No | No
Dictator game | 2 | infinite[2] | 1 | N/A[3] | N/A[3] | Yes
Diner's dilemma | N | 2 | 1 | No | No | No
Dollar auction | 2 | 2 | 0 | Yes | Yes | No
El Farol bar | N | 2 | variable | No | No | No
Example of a game without a value | 2 | infinite | 0 | No | No | Yes
Guess 2/3 of the average | N | infinite | 1 | No | No | Maybe[4]
Kuhn poker | 2 | 27 & 64 | 0 | Yes | No | Yes
Matching pennies | 2 | 2 | 0 | No | No | Yes
Minority Game | N | 2 | variable | No | No | No
Nash bargaining game | 2 | infinite[2] | infinite[2] | No | No | No
Peace war game | N | variable | >2 | Yes | No | No
Pirate game | N | infinite[2] | infinite[2] | Yes | Yes | No
Prisoner's dilemma | 2 | 2 | 1 | No | No | No
Rock, Paper, Scissors | 2 | 3 | 0 | No | No | Yes
Screening game | N | variable | variable | Yes | No | No
Signaling game | N | variable | variable | Yes | No | No
Stag hunt | 2 | 2 | 2 | No | No | No
Traveler's dilemma | 2 | N >> 1 | 1 | No | No | No
Trust game | 2 | infinite | 1 | Yes | Yes | No
Volunteer's dilemma | N | 2 | 2 | No | No | No
War of attrition | 2 | 2 | 0 | No | No | No
Ultimatum game | 2 | infinite[2] | infinite[2] | Yes | Yes | No
Princess and monster game | 2 | infinite | 0 | No | No | Yes

External links
• List of games from gametheory.net [5]
• A visual index to common 2x2 games [6]

Notes
[1] For the cake cutting problem, there is a simple solution if the object to be divided is homogeneous: one person cuts, the other chooses who gets which piece (continued for each player). With a non-homogeneous object, such as a half chocolate/half vanilla cake or a patch of land with a single source of water, the solutions are far more complex.
[2] There may be finitely many strategies, depending on how the goods are divisible.
[3] Since the dictator game only involves one player actually choosing a strategy (the other does nothing), it cannot really be classified as sequential or perfect information.
[4] Potentially zero-sum, provided that the prize is split among all players who make an optimal guess. Otherwise non-zero-sum.
[5] http://www.gametheory.net/Dictionary/games/
[6] http://www.lri.fr/~dragice/gameicons/
References
• Arthur, W. Brian (1994). "Inductive Reasoning and Bounded Rationality", American Economic Review (Papers and Proceedings), 84, 406-411.
• Bolton, Katok, Zwick (1998). "Dictator game giving: Rules of fairness versus acts of kindness", International Journal of Game Theory, Volume 27, Number 2.
• Gibbons, Robert (1992). A Primer in Game Theory, Harvester Wheatsheaf.
• Glance, Huberman (1994). "The dynamics of social dilemmas", Scientific American.
• Kuhn, H. W. (1950). "Simplified Two-Person Poker", in H. W. Kuhn and A. W. Tucker (editors), Contributions to the Theory of Games, volume 1, pages 97–103, Princeton University Press.
• Osborne, Martin J. & Rubinstein, Ariel (1994). A Course in Game Theory.
• McKelvey, R. and T. Palfrey (1992). "An experimental study of the centipede game", Econometrica 60(4), 803-836.
• Nash, John (1950). "The Bargaining Problem", Econometrica 18: 155-162.
• Ochs, J. and A.E. Roth (1989). "An Experimental Study of Sequential Bargaining", American Economic Review 79: 355-384.
• Rapoport, A. (1966). "The game of chicken", American Behavioral Scientist 10: 10-14.
• Rasmussen, Eric (2004). Games and Information.
• Shor, Mikhael. "Battle of the sexes" (http://www.gametheory.net/dictionary/BattleoftheSexes.html). GameTheory.net. Retrieved September 30, 2006.
• Shor, Mikhael. "Deadlock" (http://www.gametheory.net/dictionary/Games/Deadlock.html). GameTheory.net. Retrieved September 30, 2006.
• Shor, Mikhael. "Matching Pennies" (http://www.gametheory.net/dictionary/Games/Matchingpennies.html). GameTheory.net. Retrieved September 30, 2006.
• Shor, Mikhael. "Prisoner's Dilemma" (http://www.gametheory.net/dictionary/Prisonersdilemma.html). GameTheory.net. Retrieved September 30, 2006.
• Shubik, Martin (1971). "The Dollar Auction Game: A Paradox in Noncooperative Behavior and Escalation", The Journal of Conflict Resolution, 15, 1, 109-111.
• Sinervo, B., and Lively, C. (1996). "The Rock-Paper-Scissors Game and the evolution of alternative male strategies", Nature Vol. 380, pp. 240–243.
• Skyrms, Brian (2003). The Stag Hunt and the Evolution of Social Structure. Cambridge: Cambridge University Press.
Article Sources and Contributors 240

Article Sources and Contributors


Game theory  Source: http://en.wikipedia.org/w/index.php?oldid=465599606  Contributors: 63.208.190.xxx, 7&6=thirteen, A F K When Needed, A bit iffy, AMuseo, APH, Aaker, Aapo
Laitinen, Abigail-II, Abiyoyo, Abu badali, Acalamari, Acebrock, Acitrano, Action Jackson IV, AdamSmithee, Adashiel, Adoniscik, Adrian.benko, AdultSwim, Ahoerstemeier, Aim Here, Alakon,
Aliahmedraza, Aliekens, Amcfreely, Andonic, Andre Dahlke, Andrew Levine, Angus Lepper, AnneDrew55, Anonymous Dissident, Antandrus, Anthere, Anthon.Eff, AnthonyQBachler, Arbor,
Arjayay, Arvindn, Avaya1, AxelBoldt, B7T, Bdesham, Behavioralethics, BenFrantzDale, Billymac00, Bjankuloski06en, Bloodshedder, Blow of Light, Bluemoose, Bobblewik, Bogsat,
Bongwarrior, Bracton, Brad7777, Brendan Moody, Bret101x, Bridgeplayer, Brighterorange, BrokenSegue, C mon, CRGreathouse, CSTAR, Calabraxthis, Calton, Camrn86, Can't sleep, clown
will eat me, Canderson7, CardinalDan, Catgut, Cats and kittens, Cb6, Cdc, Chairboy, Charledl, Charles Matthews, Cheeseisafruit, Chesleya, Childhoodsend, Clay Juicer, CloudNine, Cometstyles,
Common Man, Conversion script, Cphay, Cretog8, Curps, Cybercobra, DVdm, Dameyawn, Damian Yerrick, Danger, Dank, Dave101, David Eppstein, David Haslam, David Levy, David Shay,
DavidCary, DavidScotson, Davidbod, Davidcarfi, Davidmayberry, Dcabrilo, Delirium, DerHexer, Dexter inside, Dgscott4, Dicklyon, Digitalme, Dionyziz, Discospinster, Diuturno, Dlgiffen,
DouglasGreen, DrDentz, DrJonesNelson, Dramaturgid, Drobinsonatlaur, Droll, Dromedary, Drpickem, Duncharris, Dureo, Dysprosia, EachyJ, Eags, Ebricca, EconoPhysicist, Economicprof,
EdJohnston, Edward, Edward321, El C, Electricbassguy, Elwikipedista, EmilJ, Encephalon, Enchanter, Enochlau, EnumaElish, Erianna, Escape Orbit, Espetkov, EvanYares, Everyking,
Everything counts, Ewlyahoocom, Faintly, Ferstel, FerzenR, Fioravante Patrone en, Firien, Fish and karate, Fixitrich, Flowerpotman, Fnielsen, Fortunecookie289, Francoisvn, Freegoods,
Furrykef, GHe, GabrielAPetrie, Gametheoryguy, Gametheoryme, Gary King, Gaurav1146, Gauss, Geometry guy, GeorgeLouis, Gfoley4, Giftlite, Gimboid13, Goarany, Gombang, Googl,
Graham87, Gregalton, Grick, Gronky, GroveGuy, Gtcs-student, Guyasleep, Hairy Dude, Hakperest, HappyCamper, Hardy Littlewood, Harryboyles, Haylon357, Hellkillz, Henrygb, Hoodam,
Hooverbag, Hoziron, Hqb, Hubriscantilever, Huliosancez, Huon, Hve, Hwansokcho, I203.129.46.242I, II MusLiM HyBRiD II, Ikanreed, Iluvcapra, Indego Prophecy, Infarom, Inkbacker,
Isdhbcfj, Isfisk, Isomorphic, Ivemadeahugemistake, J heisenberg, J.delanoy, JDspeeder1, JForget, Jacob Finn, Jakob.scholbach, JamesGecko, JavOs, Jaymay, Jecar, Jeekc, Jfpierce, Jhanley,
Jim1138, JimVC3, Jimmaths, Jitse Niesen, Jlairdpdx, Joe.Dirt, JoergenB, John Quiggin, John254, Jon Roland, Jonel, Jpers36, JuPitEer, JuanOso, Jumbuck, Justin W Smith, Jwdietrich2,
Kallimina, Kanmalachoa, Kenneth M Burke, Kesten, Khendon, Kiefer.Wolfowitz, KingOfBurgers, Kinos, Kirtag Hratiba, Kizor, Kku, Knowsetfree, Kntg, Kntrabssi, Koczy, Kukuliik,
Kungfuadam, Kymacpherson, Kzollman, La Pianista, Lakinekaki, Landroni, Larknoll, Larry Sanger, LeaveSleaves, LeiserJ, Leon math, Lesslame, Levineps, Lexor, Lgallindo, Liftarn, Lights,
Lihaas, LizardJr8, Lotje, LukeNukem, Lupin, Luqui, MER-C, MIT Trekkie, MLCommons, Machine Elf 1735, Magmi, Mandarax, MarcoLittel, Marenio, Mark Renier, Marqueed, Martin Kozák,
Marykateolson, Matgerke, Mathprog777, Mattisse, Matveims, Maury Markowitz, Mav, Mayumashu, McMorph23, Mcculley, Mdd4696, Meelar, Mentisock, MercyBreeze, Mfandersen, Michael
Hardy, Michel Schinz, Mike2vil, Mild Bill Hiccup, Mindmatrix, MisfitToys, Mitsuhirato, Mk270, Mk5384, Mkch, Monjur 99, Moonlight Poor, Morphh, Mousomer, Moxon, Mr. G. Williams,
Mr. Lefty, MrOllie, Msh210, Mstroeck, Mtappata, Muchness, Muinchille1, Myasuda, N Shar, NagySzondi, Nakon, NawlinWiki, Ncusa367, Neild, Nelsonheber, Nemnkim, Nethgirb, Nicemr,
Nickj, No Free Lunch, NoEdward, Nonempty, Not sure if want, Octoberhero, Oddity-, Ohanian, OleMaster, Oleg Alexandrov, Olivier, Omer182, Oravec, Otto ter Haar, PAC2, PDTantisocial,
PL290, Pamri, Pap232323, Paradoxsociety, Paste, Paul August, Pcarbonn, Pcm.nitdgp, Pedant17, Pete.Hurd, Pgk, Phila 033, Phirazo, PhnomPencil, Phronetic, Piels, Piramidon, Pontus, Portalian,
Pstarbuck, Psyoptix, Quantanew, Quasistellar, Qwyrxian, Qxz, R Lowry, R3m0t, RCSB, RHB, Rajasekaran Deepak, Randomblue, Ranger2006, Raul654, Rebroad, Remember the dot,
ResearchRave, Reseng, RexNL, Rhartness, RichardF, Rickeyjay, Rickyp, Rinconsoleao, Rjwilmsi, RobertG, Robertson-Glasgow, Robinh, Roger657, Roisterer, RomualdoGrillo, Rory096,
RoyBoy, Rubmum, RunOrDie, Ruud Binnekamp, S, SEWilco, SGBailey, SOFTBOOK, SQGibbon, Salix alba, Sam Hocevar, Sam mishra, Samohyl Jan, SandyGeorgia, Sarah McEwan, Saric,
Sceptre, Scorpion451, Scottw20, Sebastark, Selch, Sen dp, Sengkang, Serketan, Sfahey, Shadowjams, Shadowmuse, ShakespeareFan00, ShaunMacPherson, Shizhao, Shuroo, Silverfish,
Simpsnut14, Sirex98, Sitar Physics, Skomorokh, Sleeping123, Slrubenstein, Slyguy, SmartGuy, Smimram, Smmurphy, Snoyes, Socrtwo, Sole Soul, Someguy1221, Somesh, Somnambulon,
Sonicology, Sottolacqua, Spencerk, Steel, Stephenb, Steverapaport, SunCreator, Sundar, Surachit, Surgo, Swerty, Symane, T@Di, Tamaratrouts, Tarif Ezaz, Tarotcards, Tassedethe, Tazmaniacs,
Tcascardo, TeamEby1, TeenFreak07, TehGrauniad, Template namespace initialisation script, Terry Longbaugh, Thadius856, The Anome, The Land, The Thing That Should Not Be, The
Transhumanist, The code inside, The fate of atlantis, The psycho, TheRanger, TheRingess, Theda, Thingg, Thomasmeeks, Thorsen, Thug Thug, Tijfo098, TitaniumDreads, Tjamison, Tobias
Conradi, Tom harrison, TomTheHand, TowerDragon, Toxic Waste, Treborbassett, Treisijs, Trevor Andersen, Trialsanderrors, Triathematician, Triwbe, Trovatore, Trumpet marietta 45750,
Trusilver, Tslocum, Twerges, Twri, UAIED, Ultramarine, Velho, VeroTheMaster, Vervin, Vextron, Viridae, Viridian Moon, Viriditas, Volunteer Marek, Warhol13, Wavelength, Wayward,
Wehpudicabok, WeniWidiWiki, Wiki alf, WikiLaurent, Wikiant, Wikidudeman, Wikipelli, Wiknickwik, Will2k, Wilsonater23456, Wolfkeeper, Wragge, Www.msn.com, Wxlfsr, Xanzzibar,
Xchatter, Xeworlebi, Yacht, Yhkhoo, Yodler, Yoichi123, Youngtimer, Zafiroblue05, Zeno Gantner, Zoicon5, Zscout370, Zsniew, Ztobor, Zundark, Zvika, Zzuuzz, Саша Стефановић, Ъыь, 750
anonymous edits

Nash equilibrium  Source: http://en.wikipedia.org/w/index.php?oldid=466030540  Contributors: 7&6=thirteen, Ahoerstemeier, Alansohn, Aleksi aaltonen, Algebraist, Aliekens, Andrewpmk,
Angelobear, AnnaFrance, Art Carlson, Ashutosh.gupta.88, AugPi, AxelBoldt, Bartonjs, Beastly21, Benjamin1414141414141414, BiT, Bidabadi, Billlin12345, Birkett, Bissinger, Bluemoose,
Brogersoc, Bryan Derksen, Charles Matthews, Chiguel, Chinju, Chrisbbehrens, Christian Kreibich, Christopher Parham, Chuanren, Closedmouth, Conrado, Coppertwig, Counterfact, Crazy
coyote, Cretog8, DFRussia, DanTilkin, Danielcherian, Daniellewis24, Darwinek, Daveagp, DavidLevinson, Deb, Debresser, Delaszk, Denisarona, Denisutku, Dirac1933, Discospinster,
DougsTech, DrKiranKalidindi, Dratman, Drpaule, Dynamo152, EconoPhysicist, Ecthelion83, Ed Poor, El T, Ern malleyscrub, Eshock, Evercat, Everyking, Falcon8765, Fioravante Patrone en,
FizzyP, Forich, Frecklefoot, Fredericksgary, FutharkRed, Gallando, Gauss, Geometry guy, GeorgeLouis, Gerhardvalentin, Ghemachandar, Giftlite, Gillis, Gioto, Golgofrinchian, Gschmidt,
Gtcs-student, Guyfromthe80s, Hadal, Hairy Dude, Halcyonhazard, Hede2000, Hedywong, Henrygb, Homunq, Hubriscantilever, Hut 8.5, I am not a dog, ImperfectlyInformed, Inadarei,
Jadhavdevendra, Jandalhandler, Japiot, Jazzazi, Jeekc, Jeff Muscato, JesseStone, Johnnygeekiwi, Jonatanj, Joriki, Joro Iliev, Jpgordon, Jyotirmoyb, KDS4444, Kamilborkowski3, Karada,
Karl-Henner, Kborer, Kcordina, Kerrylily, Kotu Kubin, Kraftlos, KrakatoaKatie, Krohde, Kwantus, KyleP, Kzollman, Landroni, Lee Daniel Crocker, Leejefon, Lekrecteurmasque, Lendu,
LiDaobing, Liftarn, Linuxrocks123, Little-man, Luqui, MONGO, Marcello511, Marqueed, MathiasRav, Maurice Carbonaro, Maximus Rex, Michael Hardy, Mike1024, MikeHearn,
MinorContributor, MisterSheik, Msavidge, Muinchille1, Musiphil, Naddy, Naingmoeaung, NashMelo92, Neanderthalcouzin, Nimlot, Novacatz, Obradovic Goran, Oleg Alexandrov, Olivier,
Oxymoron83, Paul August, Pde, Pete.Hurd, Petrus, PhilipMW, Phorapples, Pippo2001, PoetblogMatters, Popsofctown, Poromenos, Psiphiorg, Pyschobbens, Razorflame, Rbb l181, Rgclegg, Rich
Farmbrough, Riley Schumacher, Rinconsoleao, RobertHannah89, Rrburke, RussianCash, SDY, SMcCandlish, Salgueiro, Salix alba, Samedina, Sampo, SanitySolipsism, Sbisolo, Sfarringvt,
Sfiller, ShaunMacPherson, Shoujun, Shrinershriner, Silly rabbit, SlamDiego, Slizyboy, Smalljim, Smmurphy, Socrtwo, Spike Wilbury, SpuriousQ, Steve Quinn, Steverapaport, Student396,
SunCreator, Sundar, Sureshpurohit, Tarotcards, Thumperward, Thunk, Tideflat, Titoxd, ToastyKen, Treborbassett, Trialsanderrors, Uhjoebilly, Ulflarsen, UnitedStatesian, User A1, Uucp,
Violeaf, Vladimer, Volunteer Marek, Weakerthanyou, Whpq, Willworkforicecream, WojPob, Wolf87, Wolfkeeper, Woodforc, Woudloper, Wragge, Yayay, Yogi de, Zhou Yu, Zkiraly,
Zosimos101, Ъыь, 420 anonymous edits

Cooperative game  Source: http://en.wikipedia.org/w/index.php?oldid=464407864  Contributors: Admiller, Amire80, Anonash, Arthur Rubin, Artman40, CRGreathouse, Ciphers, ClanCC,
Cretog8, CronopioFlotante, Dan Polansky, DayReader, Diego Queiroz, Downtown dan seattle, Edward, Epbr123, Ezrarez, Fioravante Patrone en, Gadfium, Gaius Cornelius, Gtcs-student, Gurch,
Gwernol, Hughdbrown, Jhertel, John Quiggin, Koczy, Krauss, Kzollman, Littleelf, Lph, Maulattu, Maurice Carbonaro, Mfandersen, Michael Hardy, Mousomer, Mu Gamma, Omkardpd,
Ortolan88, Pete.Hurd, RCSB, Rjwilmsi, RobertMeusel, TMott, Theorist2, Toby Bartels, Unschool, VictorAnyakin, Vincent.feltkamp, Wikidea, Wikiwert, Wragge, 38 anonymous edits

Information set  Source: http://en.wikipedia.org/w/index.php?oldid=419128499  Contributors: AJackl, CRGreathouse, Cretog8, GeorgeLouis, Grubber, Infvwl, Kzollman, LachlanA, Loihsin,
Mennonot, Nicoulas, PatrickFlaherty, Pete.Hurd, PigFlu Oink, Tijfo098, Treborbassett, 4 anonymous edits

Preference  Source: http://en.wikipedia.org/w/index.php?oldid=463792552  Contributors: DocOfSoc, Forich, Mild Bill Hiccup, Netknowle, Xezbeth, 7 anonymous edits

Normal-form game  Source: http://en.wikipedia.org/w/index.php?oldid=463403625  Contributors: Amire80, Charles Matthews, Ciphers, Counterfact, Cretog8, Earth, Edward, Enochlau, Gillis,
Jay Gatsby, Joerite, Kzollman, LOL, Materialscientist, MaxSem, Neifion, Obradovic Goran, Oliphaunt, Pete.Hurd, Starrynte, Treborbassett, Tregoweth, 39 anonymous edits

Extensive-form game  Source: http://en.wikipedia.org/w/index.php?oldid=455429869  Contributors: Burschik, Camrn86, Cherkash, Ciphers, Counterfact, Cretog8, Dnapoli, GeorgeLouis,
Henrygb, Kirtag Hratiba, Kzollman, Mandarax, Obradovic Goran, Oliphaunt, Pete.Hurd, Rich Farmbrough, Rinconsoleao, Tgwizard, Tijfo098, Treborbassett, WhiteC, 36 anonymous edits

Succinct game  Source: http://en.wikipedia.org/w/index.php?oldid=458807917  Contributors: Bender235, Hermel, Itai, LachlanA, Malcolma, Tijfo098

Trembling hand perfect equilibrium  Source: http://en.wikipedia.org/w/index.php?oldid=450779053  Contributors: Abdel Hameed Nawar, Amcfreely, Bromille, Cfp, Giftlite, GregorB,
Gtcs-student, Hubriscantilever, IKiddo, Janlo, Kzollman, Meijerink, Pete.Hurd, Quirky, Rozza69, Smmurphy, Trialsanderrors, Yaris678, 23 anonymous edits

Proper equilibrium  Source: http://en.wikipedia.org/w/index.php?oldid=392553830  Contributors: Bromille, Counterfact, Henriqueroscoe, 2 anonymous edits

Evolutionarily stable strategy  Source: http://en.wikipedia.org/w/index.php?oldid=466190556  Contributors: Ajbrown141, Aliekens, Anxietycello, Ashlaender, Barticus88, Benja, Benlisquare,
Brighterorange, BrokenSegue, Bueller 007, Cat's Tuxedo, Conversion script, Coontie, DFRussia, Darker Dreams, Duncharris, EconoPhysicist, Emperorbma, Erianna, Falcor84, Fredrik,
Geeteshgadkari, Geometry guy, Gerrit, Graham87, Hairy Dude, Hannes Röst, Hequba, Homunq, I am not a dog, Igodard, Ilmari Karonen, Jackzhp, Karada, Kelly Martin, Khamsin, Krohde,
Kzollman, Landroni, Lexor, Limbo socrates, Lotje, Marc Harper, Matttttt, Memills, Michael Hardy, Michael Rogers, Minesweeper, Monado, Mpadowadierf, NCurse, Nanobri, Noisy, Pete.Hurd,
Phoenix-forgotten, Quadell, RDBrown, Rettetast, Rich Farmbrough, Richard001, Rjwilmsi, Seijihyouronka, SidP, Steverapaport, Sundar, TedPavlic, Template namespace initialisation script, The
Anome, TimVickers, Treefrog007, Trialsanderrors, Uriobolski, User A1, WillWare, 51 anonymous edits

Risk dominance  Source: http://en.wikipedia.org/w/index.php?oldid=450136632  Contributors: CambridgeBayWeather, LukeNukem, Novidmarana, Rinconsoleao, Smmurphy, Tabletop,
Trialsanderrors, 13 anonymous edits
Article Sources and Contributors 241

Self-confirming equilibrium  Source: http://en.wikipedia.org/w/index.php?oldid=379205831  Contributors: Benja, BlueNovember, Ciphers, Cretog8, Fconaway, GeorgeLouis, 1 anonymous
edits

Dominance  Source: http://en.wikipedia.org/w/index.php?oldid=453868490  Contributors: Afelton, Bender235, Btyner, Bushytails, Cambrasa, CambrianXp, Christopher Parham, DavidCary,
Drorsh, Dysepsion, Iota, Isnow, Isomorphic, Jeekc, Jokestress, Kzollman, Madmardigan53, MountainGoat8, MrOllie, Pete.Hurd, PiccoloNamek, Rentier, Shiyang, Skoosh, SmartGuy,
Smmurphy, That Guy, From That Show!, Trigger hippie77, Ъыь, 40 anonymous edits

Strategy  Source: http://en.wikipedia.org/w/index.php?oldid=462985948  Contributors: AdamSmithee, Alexsalgado, Bender235, Benja, Bokken, Cesiumfrog, Chewie, Cretog8, Cureden, David
Eppstein, Dirnstorfer, Eastlaw, Elf, Frank Guerin, Ftm.ashari, Giftlite, Gioto, Gwern, Jitse Niesen, Jlmark3, Kilva, Kzollman, Liger2233, Littlebear1227, Llanowan, MercyBreeze, Mousomer,
Obradovic Goran, Pb30, Pete.Hurd, Richard Lotspard, Rinconsoleao, Schneelocke, SimonP, SmartGuy, TheOracle23, Tijfo098, Treborbassett, WadeSimMiser, Wragge, Xeeron, ‫ﻣﺎﻧﻲ‬, 29
anonymous edits

Tit for tat  Source: http://en.wikipedia.org/w/index.php?oldid=461509857  Contributors: AdamAtlas, Adoniscik, Amorymeltzer, Andycjp, Anetode, Arnoutf, Asrabkin, AstroHurricane001,
Averykrouse, AxG, Barboakley, Before My Ken, Bender235, Bengtlueers, Bhadani, Bmdavll, Boyd Reimer, Brichard37, BrokenSegue, Broozo, CWY2190, ChangChienFu, Chealer,
CheekyMonkey, Chipmunk393, Colton3x3, Coneslayer, Cretog8, Dangherous, David Latapie, DenisDiderot, Dkristoffersson, Domokato, Dreaded Walrus, Dwarf Kirlston, E=MC^2, Eltener,
Etaonish, Eurobas, EvanProdromou, Faradayplank, GRuban, Gary King, Geldart, Goethean, Good Olfactory, Grick, Hadal, Halcyonhazard, Hephaestos, Heyhay, Hughdbrown, Humblefool,
ImperatorExercitus, J.Gowers, JForget, JPK, Jfischoff, John Fader, Kzollman, Lifefeed, Lotje, Lovelac7, Luis Dantas, MER-C, Maarten Hermans, Machine Elf 1735, Madmardigan53,
Marasmusine, Marc Harper, Marcel Kosko, Maurice Carbonaro, McGeddon, Melsaran, Michael Hardy, Michael Snow, Michaledwardmarks, Misiekuk, Moloch09, Momentfarm, Mr.E,
MrSomeone, Nightscream, Nobel prize 4 peace, OlEnglish, Oliphaunt, Omphaloscope, Onodevo, Pete.Hurd, Pinethicket, Pnevares, Punctured Bicycle, Quantling, Quebec99, R, R Lowry,
Rumiton, Ryan Norton, Sarranduin, Seans Potato Business, Sgeureka, Simishag, SiriusB, Smmurphy, Spencerk, Stephen B Streater, Stepheng3, Stevechilton, Steverapaport, Superbock, Taejo,
Tamfang, TechnoFaye, TelopiaUtopia, TheSix, ThreeDee912, Toytoy, Trialsanderrors, Twas Now, UnDeRsCoRe, XxTimberlakexx, Yayay, ZeroOne, 119 anonymous edits

Grim trigger  Source: http://en.wikipedia.org/w/index.php?oldid=458782730  Contributors: Austriacus, Bender235, Bfinn, Davetron, Elembis, Gabriel.c.drummond.cole, Giftlite, Gnomus, John
Quiggin, Kzollman, Louisng114, Michael Hardy, Primergrey, Spincrisis, Suzanne0-0, Tktktk, Wragge, 3 anonymous edits

Collusion  Source: http://en.wikipedia.org/w/index.php?oldid=465314112  Contributors: 16@r, Avochelm, Betacommand, BigFatBuddha, Bluemoose, Buckyboy314, CapitalSasha, Captaincoop,
Carinasl, Chendy, DarkSaber2k, Eastlaw, F15x28, Falcon8765, Finalius, Freechild, Gabi S., GregorB, Icairns, J.DrWatson, Jezmck, Jguzmanb, Jonkerz, Katach, Kozuch, Kzollman, LaidOff,
Lycurgus, Mcdennis13, Neelix, Nick Number, Nightenbelle, Ninly, Operknockity, Pete.Hurd, Philippe, Phydend, R Lowry, RattusMaximus, Rccoms, RekishiEJ, ReluctantPhilosopher, Rich
Farmbrough, Ronz, Russ Anderson, Sardanaphalus, Scyth, Serverradar9, Smallman12q, SpuriousQ, Stevec240, Stevertigo, Struway, Tassedethe, Tb, Timc, Uniqueuponhim, Violetriga, Vranak,
Wachen, WikHead, Winhunter, Wonglong, Woohookitty, 113 anonymous edits

Backward induction  Source: http://en.wikipedia.org/w/index.php?oldid=425652305  Contributors: Anomalocaris, Bediako, Diligent, Dnjkirk, EagleFan, EconoPhysicist, Fioravante Patrone en,
Giftlite, Gregbard, Hongooi, Intimaralem85, Iridescent, Jersey Devil, Kristinw, Kzollman, Lambiam, Luqui, McSly, MiNombreDeGuerra, Michael Hardy, Mineminemine, Omnipaedista, Paul
August, Quarl, Rinconsoleao, Slanderson, Sloth monkey, Tijfo098, User A1, Vagary, William M. Connolley, 13 anonymous edits

Markov strategy  Source: http://en.wikipedia.org/w/index.php?oldid=411897358  Contributors: AndrewHowse, LachlanA, Malcolma, Megaloxantha, Severo, Thekohser, Tim1357, Tktktk, 2
anonymous edits

Symmetric game  Source: http://en.wikipedia.org/w/index.php?oldid=428208111  Contributors: Antimatroid, Ciphers, Dark.knight ayush, HelicopterCrisps, Humanengr, Kzollman,
Lionelkarman, Michael Hardy, Obradovic Goran, PV=nRT, Pete.Hurd, Robinh, Tijfo098, 7 anonymous edits

Perfect information  Source: http://en.wikipedia.org/w/index.php?oldid=455340706  Contributors: Alexsoddy, Apothecia, Arvindn, Batmanand, Bluemoose, Bryan Derksen, Byelf2007, Ciphers,
Commander Keane, Cretog8, DirkOliverTheis, Dreispt, Grick, John Quiggin, King brosby, Kmweber, Kzollman, La goutte de pluie, Levineps, Lowellian, M.nelson, Madmardigan53, Maurreen,
Michael Hardy, Mikhail Dvorkin, Morven, Mozzie, Nomen4Omen, Pearle, Pete.Hurd, Petter Strandmark, Rast, RayBirks, Rich Farmbrough, Rjanag, TallulahBelle, Tetron76, Tobias Bergemann,
Toobaz, Tornadowhiz, Treborbassett, Vincom2, 19 anonymous edits

Simultaneous game  Source: http://en.wikipedia.org/w/index.php?oldid=418494059  Contributors: Ciphers, GoingBatty, RomualdoGrillo

Sequential game  Source: http://en.wikipedia.org/w/index.php?oldid=429186552  Contributors: Calvinballing, Ciphers, Henrygb, Isomorphic, Kzollman, Lord Hidelan, Martpol

Repeated game  Source: http://en.wikipedia.org/w/index.php?oldid=447282159  Contributors: AMuseo, Aiken drum, Andropod, Bender235, Ccerer, Ciphers, Dreadstar, EconoPhysicist, Gaius
Cornelius, Gertasik, HieronymousCrowley, JerroldPease-Atlanta, Kzollman, M3taphysical, Msoos, Pete.Hurd, Phyte, Smmurphy, Sudderth1, User A1, Yatinkr, Ъыь, 30 anonymous edits

Signaling games  Source: http://en.wikipedia.org/w/index.php?oldid=419258654  Contributors: Antique cuckhoo clock, Bender235, Chuunen Baka, Father Goose, Fieldday-sunday, Gbeeker, Jni,
John Quiggin, Kolyma, Kzollman, Linas, Luk, Neelix, Pete.Hurd, Rhbest, Rjwilmsi, Signalsbanduk, Smmurphy, Smoothhenry, Some standardized rigour, Tgetty, Timothyjlayton, Treborbassett,
28 anonymous edits

Cheap talk  Source: http://en.wikipedia.org/w/index.php?oldid=453887719  Contributors: 6birc, Ashmoo, Bender235, Bluemoose, CloudNine, Cogiati, David Sneek, Dr Greg, Ephraim33,
Gontrode, Headbomb, Iyerkri, John Quiggin, Koavf, Kzollman, Lexor, MangoWong, MountainGoat8, Pearle, Pete.Hurd, RJFJR, Reetep, Rjwilmsi, Treborbassett, 9 anonymous edits

Zero-sum  Source: http://en.wikipedia.org/w/index.php?oldid=419976035  Contributors: 164.58.10.xxx, Alexf, Alfanje, Alon, Andre Engels, Andycjp, AnonMoos, Arthur Rubin, Ash211,
August1991, BD2412, Bakanov, Bakerstmd, Bender235, BiT, Bromille, Bryan Derksen, CRGreathouse, Ciphers, Comfortably Paranoid, Complex (de), Conversion script, Cougarbate,
Courcelles, Cretog8, DavidScotson, Derek Ross, Dgsaunders, Djozwebo, Donfbreed, DrDentz, E23, Egriffin, ElectricRay, Ellywa, Emperorbma, Emurphy42, Erianna, Fioravante Patrone en,
Frankk74, Gappiah, Geir Gundersen, George Richard Leeming, Giftlite, Glengordon01, Goethean, Grafen, Graham87, Greenhope01, Guswen, Handcuffed, Henrygb, Hobo loquens, Hq3473,
Hutschi, Ihope127, Ixphin, JHunterJ, JamesLucas, Jan Hidders, Jel, Jmrowland, Karlscherer3, KennethJ, Kim Bruning, Klanda, Kocio, Koveras, Kzollman, La goutte de pluie, Lacrimosus,
LokiClock, Lord of the down, Luna Santin, Marc K, Marcus Brute, Moopiefoof, Mousomer, Myasuda, Mydogategodshat, Nate1481, Netoholic, Nowordneeded, Obradovic Goran, OldNick,
PCM2, Peregrine981, Personman, Pete.Hurd, Prolinol, Psztorc, Pyb, R Lowry, RTV User 545631548625, Risi, Silverxxx, Smallbones, Smmurphy, Solipsist, Starryboy, Suisui, Svetovid, Tajsis,
Talldean, Tennekis, Tha human, This, that and the other, Thomasmeeks, Tijfo098, Timrollpickering, Tmh, Tobias Hoevekamp, VictorAnyakin, Volunteer Marek, Weathereye, Weien, Welsh,
Wik, Williamlindgren, Wordyness, Yatinkr, Zakuragi, Zoganes, Zsniew, 120 anonymous edits

Mechanism design  Source: http://en.wikipedia.org/w/index.php?oldid=454428961  Contributors: Aitch Eye, Aknxy, Akriasas, Alexwch, Cretog8, David Eppstein, Dexter inside, DocWatson42,
F.khanmirzaee, Gary King, Grochim, Gtcs-student, Halcyonhazard, Isomorphic, Jamesontai, Jfraffen, John Quiggin, Jon Roland, KimvdLinde, Liberal Saudi, Lycurgus, MaCRoEco,
Masterpiece2000, Mattisse, Michael Hardy, Neilbeach, Nonempty, Ogo, Oleg Alexandrov, Oliphaunt, Panda, Pcm.nitdgp, Pcm1.nitdgp, Pde, Ph.eyes, Rjwilmsi, SOFTBOOK, Shoeofdeath,
StephenWeber, The Anome, TheTito, Theorist2, Thomasmeeks, Tobacman, Toytoy, Unitanode, UnitedStatesian, Utilitus, Voodoom, Wikispan, WillWare, Xenon54, Zeno Gantner, 33
anonymous edits

Bargaining Problem  Source: http://en.wikipedia.org/w/index.php?oldid=424188631  Contributors: Akalai, Allliam, Bender235, Cfp, Cretog8, Dondegroovily, E.qrqy, Ever wonder,
Floquenbeam, Giraffedata, Jessieliaosha, MGM08314, PigFlu Oink, 22 anonymous edits

Stochastic game  Source: http://en.wikipedia.org/w/index.php?oldid=461678052  Contributors: Aneyman, Bender235, Cuaxdon, David Eppstein, Dcoetzee, Keynes.john.maynard,
Kiefer.Wolfowitz, PhS, Rjwilmsi, Sandius, Sudde001, Tijfo098, 17 anonymous edits

Large poisson game  Source: http://en.wikipedia.org/w/index.php?oldid=417965209  Contributors: Bdemeshev, Bender235, Dyaa, Mild Bill Hiccup, 3 anonymous edits

Nontransitive game  Source: http://en.wikipedia.org/w/index.php?oldid=412330152  Contributors: Bender235, Gregbard, Jitse Niesen, JocK, Michael Hardy, Sam Staton

Global game  Source: http://en.wikipedia.org/w/index.php?oldid=379060891  Contributors: Amalas, Bearcat, CronopioFlotante, Michael Hardy, Rinconsoleao, Tobacman

Prisoner's dilemma  Source: http://en.wikipedia.org/w/index.php?oldid=465265517  Contributors: 129.186.19.xxx, 12mmclean, 7&6=thirteen, 848219pineapple, AdRock, Adam Conover,
Akerans, Alai, Alaiche, AlanM1, Alex3917, Alison, AllanLee, Alsandro, Amead, Americanhero, Andrejj, Anetode, Angelbo, Annielogue, Aranherunar, Artoasis, Arvindn, Asaadi, Ascorbic,
Atlant, Audacity, AxelBoldt, Az7997, Babij, Baccyak4H, Badger Drink, Bando26, Barrrower, Behavioralethics, Benjamin H-W, Bhause, BigK HeX, Bigturtle, Blackmetalbaz, BlueNovember,
Blueyoshi321, Bona Fides, BookgirlST, Bossrat, Boud, Bouke, Bracton, BracusAnguis, Brainpo, Brian Kendig, Brichard37, Brighterorange, Brokenfixer, Bruguiea, Bryan Derksen, Btwied,
BurdetteLamar, CBM, CRGreathouse, Calabe1992, Calculuslover800, Can't sleep, clown will eat me, CapitalR, Carloszgz, Causa sui, Cedrus-Libani, Chinju, Chipuni, Chris 73, ChrisG,
Chrishmt0423, Christofurio, Chriswaterguy, Chromaticity, Ciaran H, Cmh, Cntras, Cokaban, Colonies Chris, Conversion script, Cpiral, Cpt, Cretog8, Cryptic C62, Cyrius, Cyrus Grisham,
DEDemeza, DHN, DTM, Da Vynci, Damuna, Dan East, Dandrake, Daniel Mahu, David Gerard, DavidScotson, Dcoetzee, Dcsohl, Deadbath, Deltabeignet, Denis Diderot, DenisDiderot,
Dimitridobrasil, Dogcow, Dominus, Doradus, Dr.K., Drmies, Duncharris, Dunhere, Dying, Dysprosia, EPM, EchetusXe, Edcolins, Eebster the Great, Emersoni, Emhoo, Eric119, Erin Billy,
Erroneous01, Eruantalon, Estudiarme, Etaonish, Eurosong, Evercat, Every name is taken12345, FWBOarticle, FatalError, Fedfiore, Fenwayguy, Fiskbil, Flambelle, Foobar, Fourohfour, Freddy
engels, Fredrik, Frostlion, Frymaster, Furrykef, Gaius Cornelius, Gakmo, Geekguy02, Giftlite, Givegains, Glasbak, Glummy, Goarany, Graham87, Gregvs3, Grick, Griffin147, Gronky, Grosscha,
Guido del Confuso, Gus Polly, H2g2bob, Hadžija, Haonhien, Hawkinsbiz, Haya shiloh, Helfrich, Henry Flower, Hibernian, Hiroshi-br, Hodg, Hu, Huliosancez, Hurmata, Ilmari Karonen,
Inkbacker, InverseHypercube, Iridescent, Iusewiki, IxnayOnTheTimmay, J.delanoy, JRSpriggs, Jason.grossman, Jay32183, JayJasper, JebJoya, Jedibob5, Jedonnelley, JeeAge, Jeff0106,
Jmcc150, Jmrowland, Jnestorius, Joao, JocK, JohnAlbertRigali, Johnleemk, Jonathanstray, Jpkotta, Jsevilla, Jtneill, Junes, Jwrosenzweig, KConWiki, Kencf0618, Kenneth M Burke, Khoikhoi,
King of Hearts, KnightRider, Kozuch, Kubigula, Kurt Jansson, Kzollman, LC, LMB, Lambertman, Lamentation, Lawrencekhoo, LeonardoRob0t, Ligulem, Likebox, Lindosland, Logophile,
Loom91, Lotje, Lpetrazickis, Luk, LukeNukem, LukeSurl, Luna Santin, Lupin, Lyran, Maduskis, Magister Mathematicae, Malleus Fatuorum, MapsMan, MarSch, Mariojalves, Marskell, Martijn
faassen, Masterofpsi, Materialscientist, Matthew Stannard, Mattisse, Mav, Maximus Rex, Michael Hardy, Michael Rogers, Michelle eris, Mikhailfranco, Mikolik, Mindmatrix, Mkcmkc, Mlm42,
Moon light shadow, Morgrim, Mrand, Mtu, Mullibok, NeonMerlin, Neutrality, NickLinsky, Nikodemos, Nlasbo, Noisyboy1234, Nwe, Ocam, Octopuppy, Ojigiri, Oliphaunt, Olivier, Orange
Suede Sofa, Orbst, Osterczyk, OwenBlacker, OwenX, PamD, Pandacomics, Pathless, PaulStephens, Pete.Hurd, Pfortuny, Philippe277, Plumbago, Ponder, Poor Yorick, Psb777, Quantling,
Qwerty1234, R Lowry, R3m0t, RJII, RZ heretic, Radeks, Ral315, Ramorum, Randomblue, Raul654, Ravik, Reagle, Redrocket, RexNL, Reywas92, Rgoodermote, Rich Farmbrough, Richard001,
Richfife, Rik G., Robinh, Romanm, Rompe, Rosuav, Royan, Royboycrashfan, Rracecarr, Ruud Koot, Ruy Lopez, RyanDesign, Ryanhanson, Ryguasu, Saforrest, Samohyl Jan, SandyGeorgia,
Sbloch, Sdorrance, Seabreezes1, SeptimusOrcinus, Shaggorama, Sikelianos, Simishag, Simultaneous movement, Smmurphy, Snied, Snoyes, Socrtwo, Solitude, SpNeo, Space Pirate 3000AD,
Spencerk, SpookyMulder, Sslevine, Starpollen, Stazed, SteinbDJ, Stekat, Stephen B Streater, Syzygy, TRBP, TWCarlson, Tabletop, Tacomaster4, Tad Lincoln, Taejo, TakuyaMurata, Tedernst,
Template namespace initialisation script, Tempshill, Tfll, ThAtSo, That Guy, From That Show!, The Anome, Thehornet, Themissinglint, Tide rolls, Timwi, Tloc, Tluckie13, Toi, Tom harrison,
Treborbassett, Trialsanderrors, Tribaal, Trovatore, TurilCronburg, Tyciol, Uniqueuponhim, Urgos, Vapour, Verloren, Vintermann, ViperSnake151, Volunteer Marek, Vorapsak, Vt-aoe, W anthro,
Walkie, Wanani, Webponce, Westsider, Wfeidt, WhatamIdoing, Whisky drinker, Wile E. Heresiarch, Wilfried Elmenreich, Wolfkeeper, Wolfman, Woohookitty, Wragge, Wwengr, XLerate,
Xyzzy n, Xyzzyplugh, Yacht, Yvh11a, Zafiroblue05, Zsniew, Ъыь, 591 anonymous edits

Traveler's dilemma  Source: http://en.wikipedia.org/w/index.php?oldid=456029225  Contributors: Another Believer, C-randles, CRGreathouse, Chipuni, Conical Johnson, Connelly, Giftlite,
INVERTED, JocK, Kzollman, Megaloxantha, Michael Hardy, Miss Madeline, R.e.b., Reywas92, Rjwilmsi, RucasHost, SneakyTodd, Troped, Venado, 17 anonymous edits

Coordination game  Source: http://en.wikipedia.org/w/index.php?oldid=456027848  Contributors: Buckyboy314, Discospinster, EAderhold, JaGa, KrakatoaKatie, Krauss, Kzollman, Maurice
Carbonaro, NE2, Pete.Hurd, Rich Farmbrough, Rinconsoleao, Roisterer, ST47, Signalhead, SunCreator, Trialsanderrors, Vipinhari, Wragge, 27 anonymous edits

Chicken  Source: http://en.wikipedia.org/w/index.php?oldid=465044621  Contributors: Aardark, Aaron Brenneman, Abljkgjkf, AdmiralHood, Alansohn, Aleph Infinity, Alfanje, Aliekens,
AnOddName, Antandrus, Apelbaum, Apparition11, Aris Katsaris, Avengerx, Babaroberto, Bkell, BrotherE, Bryan Derksen, Burner0718, Ceetar, CesarB, Complex01, Cookiemobsta, Cretog8,
DFRussia, Dcoetzee, Dhp1080, Discospinster, Dismas, Donreed, E1890, East718, Edratzer, Elizabeyth, Emurphy42, Ergotius, Evercat, Ezadarque, Flyguy649, Gakmo, Gamer123456754321,
Geniusdude247, Geometry guy, Ginsengbomb, Glane23, GregorB, HiDrNick, HieronymousCrowley, Hut 6.5, Hydrogen Iodide, Invertzoo, Isopropyl, JHMM13, Jak86, Jamie C, JocK, John Link,
Jorend, Kateshortforbob, KnightofNEE, Kolobochek, Kuru, Kylu, Kzollman, Ld100, LilHelpa, Luis r izquierdo, Luna Santin, Malcohol, Meelar, Michael Hardy, Mikedelsol, Mindmatrix, Nice
poa, Ninakscgirl21, Novamo, Orudge, Ospalh, Pete.Hurd, Pils, Pinkkeith, Pnm, Polynomial4456, Preposterone, Pretzelpaws, RPGmaple, Rasmus Faber, RexNL, Reywas92, Rgoodermote,
Ricardolacombe, Rjwilmsi, Savidan, SchmuckyTheCat, Scorpion451, Sir Trollsalot, Spacey, Spencerk, Stevechilton, Stevertigo, Superm401, Tali.g, The ed17, TimVickers, Timwi,
Trialsanderrors, Ultrahaggis, Unyoyega, Vanished user 03, Volunteer Marek, Vroo, Walkiped, Zanuga, Zoltan271828, Zsniew, 140 anonymous edits

Centipede game  Source: http://en.wikipedia.org/w/index.php?oldid=427609988  Contributors: Action Jackson IV, Agamemnus, Arthur Rubin, Bender235, Blahedo, C-randles, CRGreathouse,
Charvest, Christopher Parham, Chwech, Crazy Boris with a red beard, D.brodale, DerBorg, Elembis, Enochlau, Gioto, Glane23, GregorB, Ihope127, JimVC3, Jrouquie, Kzollman, LarryJeff,
Maimone, Matt314, Mau.zachrisson, Michael Hardy, Michael Rogers, Mushyrulez Alot, NYKevin, Parcoj, Pengo, Perryar, Psztorc, Qe2eqe, Rbrwr, Rsmead, SE16, Vagary, Voidmaw, Xs935, 36
anonymous edits

Volunteer's dilemma  Source: http://en.wikipedia.org/w/index.php?oldid=461728270  Contributors: Amalas, Bender235, Cybercobra, Edward, EffeX2, JeffreyGomez, JocK, M3taphysical,
Michael Hardy, Niceguyedc, Nixeagle, Reinyday, Remember, Sbloch, Tesseran, VascoAmaral, Xanzzibar, 18 anonymous edits

Dollar auction  Source: http://en.wikipedia.org/w/index.php?oldid=438186721  Contributors: Acroterion, Aliasad, Alta-Snowbird, Bender235, Canned Soul, Capricorn42, Cmdrjameson, Comet
Tuttle, Coppertwig, Cretog8, DMCer, Ddstretch, Dejan Jovanović, DropDeadGorgias, Ejwong, ElectricRay, Ettrig, Happyhappyallhappy!, J.delanoy, Kent Wang, Kzollman, Lambiam, Mtjaws,
NawlinWiki, Oerjan, OneWeirdDude, Parhamr, Payam prz, Pete.Hurd, Philwelch, R'n'B, Rasmus Faber, Rufasto, SURIV, Scorpion451, Sebastark, Turkeyphant, Vahe Kharazyan, Volunteer
Marek, Whosasking, Xaosflux, 34 anonymous edits

Battle of the sexes  Source: http://en.wikipedia.org/w/index.php?oldid=459878008  Contributors: Akerans, Alexis Brooke M, Arthur Rubin, Bluemoose, Btwied, Buckyboy314, Cheater no1,
Cyrus Grisham, Danh, Davidbod, Elexhobby, Ergotius, Eve Teschlemacher, Gcc111, Greg Tyler, Jrouquie, Jswitzer, KingShibby, Kulyuhkldffsdsfesfsdf, Kzollman, Melchoir, Musiphil,
Pete.Hurd, Quest for Truth, Reyk, Reywas92, Ringbang, Robbie314, Ruinia, Sam Hocevar, Scorpion451, SemanticMantis, Sewebster, Stpasha, Trialsanderrors, Yausmaam, Zsniew, דניאל צבי, 37
anonymous edits

Stag hunt  Source: http://en.wikipedia.org/w/index.php?oldid=456934348  Contributors: AlexP, Applejuicefool, BD2412, Burpen, Claymore, Clearlyhidden19, CommonsDelinker, Cruci, Derek
Ross, Enfascination, Everyking, Hmains, Kzollman, Mindmatrix, Psiphiorg, Rinconsoleao, Rsmead, Stephen B Streater, TheoClarke, Trialsanderrors, Xiangjw, 37 anonymous edits

Matching pennies  Source: http://en.wikipedia.org/w/index.php?oldid=429714692  Contributors: Art LaPella, AySz88, Bender235, Cretog8, Culix, Evercat, Fsufezzik, Kzollman, Lessthanideal,
Light current, McGeddon, Melchoir, Mild Bill Hiccup, NYKevin, Pepeeg, Pete.Hurd, Pseudoquark, Tankparksalute, Tiger Khan, Tijfo098, Trialsanderrors, Yausmaam, Zvika, 17 anonymous
edits

Ultimatum game  Source: http://en.wikipedia.org/w/index.php?oldid=464875259  Contributors: AMuseo, Aardwolf, AjitPD, Alai, Antonio Lopez, Astanton, AxelBoldt, B4hand, BanyanTree,
Behavioralethics, Binks, Bjgyg, BobHackett, Brossow, Btwied, CRGreathouse, Chris V. W., Comfortably Paranoid, Dcoetzee, DocWatson42, EPM, Emersoni, Ettrig, EventHorizon, Geometry
guy, Gizmo II, Gveret Tered, Hamiltondaniel, Headbomb, InverseHypercube, Isomorphic, Jason Recliner, Esq., Jivecat, Jjamison, Karijne, Kzollman, Landroni, Lingust, Matthewcgirling,
Michael Hardy, MountainGoat8, MrOllie, Nbearden, Osvaldi, Pascal666, Protonk, Qmwne235, Quasarblue, Rjanag, Rjwilmsi, Rlove, RyanCross, Shalom Yechiel, ShelfSkewed, Shentino,
Smmurphy, Solitude, Stolsvik, Tabletop, Technopat, Theodore Kloba, Thomasmeeks, Tokorode, Tommy, Trialsanderrors, VeritasEtLuz, VernoWhitney, Volfy, WikiSlasher, WoodenTaco,
Wragge, Yaroslav Blanter, Zachaysan, Zenomax, Zingo75, 61 anonymous edits

Rock-paper-scissors  Source: http://en.wikipedia.org/w/index.php?oldid=466081714  Contributors: .V., 0dd1, 11pnelson, 1ForTheMoney, 293.xx.xxx.xx, 3frenchhens2turtledoves1cup,
63.12.132.xxx, 65.96.213.xxx, 75th Trombone, 8012ED0177, ALargeElk, AaRH, Aaron carass, Aaronstj, Abigail-II, Acather96, Achoo5000, Admiral Norton, Adriaan, Adrian J. Hunter,
Aepryus, Aeroknight, Agvulpine, Ahoerstemeier, Ainlina, Aitias, Ajsh, Alansohn, AlexHOUSE, Alison22, All we did was die..., Amazing backslash, Anaxial, Anchors, AndrewvdBK, Andycjp,
Anetode, Angelbo, Angeldeb82, Anomie, Antediluvian67, Anthony Appleyard, Anárion, Apfox, Archangel127, ArielGold, Arjayay, ArqMage, ArthurDenture, Arvindn, Ashesindust, Ashmodai,
Ashmoo, Asiaindigo, Atarr, Augiedog2010, Awthur, Azndragon126, Azul, BACbKA, BD2412, Bakheer, Balloonman, Bamber100, Bamber101, Batmanand, Beach drifter, Beland, Benbest,
Bender235, Benjaminb, Benjaminhammond, Benlisquare, Beyond My Ken, BigEyedFish, Bilderbikkel, BillFlis, Billymac00, Bladonad, Blank Frank, Bloodshedder, Bo Lindbergh, Bob the ducq,
Bobo192, Bongwarrior, Bonus Onus, Borgx, Brian Crawford, BrianKnez, Brw12, Bryan Derksen, Btljs, Btyner, Bucketsofg, Bulbaboy, Burke Libbey, Burningview, Burschik, Bwfdc, Byankno1,
Byronknoll, CDThieme, CJC47, CWY2190, CalBears99, Caleby, Can't sleep, clown will eat me, CanadianLinuxUser, Canderson7, Captain Zyrain, CaptainDDL, Carioca, CarmelitaCharm,
Carribus, Carroy, Carter, Catdude, Cattus, Caz999, Cdc, Chadloder, Charles Matthews, CharlesHBennett, Charliewynn, Chealar, Check ya noggin, Cheetahstu, Chetvorno, ChildofMidnight,
Chinasaur, Chingchongcha, Chogno98, Chris Hanson, Christopherlin, Chuck Smith, Ckatz, Cloud13, Cmsg, Cokoli, Cold Season, Colonies Chris, Commander Zulu, Conti, Conversion script,
Cormorant, Coryshrmn, Cosmetor, CosmicJake, Couldntthinkofanusername, Cptmurdok, Crazytales, Cretog8, Crossmr, Cruci, Cyberia23, Cyde, Cyp, DARTH SIDIOUS 2, DMCer, DMacks, Da
Joe, DaMeanHippo, Dalereese, Damien Prystay, Dan100, Dandv, Daniel Mahu, Daniel Olsen, Daniel,levine, Danielt998, Danny Fenton, Dante Alighieri, Daqu, Darthalex314, DarylNickerson,
Dasondas, Dave Runger, David Gerard, David spector, Davidhbolton, Davidizer13, Dcoetzee, Ddpwns, Decumanus, Dejitaru, Deltabeignet, Dex1337, Dhlstrm, Dicklyon, Dico Veritas, Diego
Moya, Discospinster, Dismas, Dissident, Diz, Djmerlin3, Dlohcierekim, Dmmaus, Dockingman, Dogman15, Dominus, DoriSmith, Dpakdel, Dragoonmac, DreamGuy, Dreamyshade, Drilnoth,
DropDeadGorgias, Druff, Dtrimm88, Dude902, Dysmorodrepanis, Dysprosia, E-Kartoffel, Eagle9141, EamonnPKeane, Earl CG, Editor510, Eeekster, Einar Myre, Elassint, Electricbassguy,
Elendal, Elonka, Eluchil, EoGuy, Eouw0o83hf, Equazcion, Erachima, EronMain, Error, Eskandarany, Esperant, Esquire1386, Euryalus, Everyking, Evil Egg, Excirial, FWBOarticle, Fantusta,
Faradayplank, FastLizard4, FatzooRPS, Favonian, Feezo, Felix Dance, Felix Wiemann, Feudonym, Fg2, FiP, Fieari, Filzstift, Flarn2006, Flyguy649, Formeruser0910, FoxInShoes, Franklint,
Frazzydee, Frederick Spek, Fredrik, Frencheigh, Fritzpoll, Frizero, FrozenUmbrella, Frungi, Fuhghettaboutit, Funandtrvl, Funk2010, Furrykef, Fæ, GRuban, GT3, Gail, Gaius Cornelius,
Gallocher E, Galzigler, Garethfoot, Gawaxay, Geir Arne, Gerwalker, Gesiwuj, Gilliam, Gkerster, Glane23, Glenn, Gmonfils, Gogo Dodo, GoodBooksMelbourne, Gracefool, Gracenotes,
Graham87, Greatal386, Gscshoyru, Gtrmp, Guliolopez, Gvf, Gzornenplatz, Habj, Hairy Dude, Halo, Halosix, Hamilton burr, Hangways, Happy-melon, Harryboyles, Haukurth, Hayama Akito,
Hcane55, Henning Makholm, Hessamnia, Heyesy, Hgilbert, Higherfrequencies, Histrion, Hogyn Lleol, Hooperbloob, Hotcoffeegirl, Hq3473, Htmnssn, Hydrargyrum, INVERTED, IVinshe,
Iago4096, Ian Pitchford, Ice Jedi5, Icundell, Idleguy, Illuvatar,, Imperial Star Destroyer, Imperialles, InShaneee, Insomniacpuppy, Inverarity, Ipsenaut, Irishguy, Isomorphic, Ivanip, Ixfd64, J.
Van Meter, J.delanoy, J03113n, JForget, JLaTondre, JMyrleFuller, JNW, JP585, Jaimetout, Jameselder100, Jamesooders, Jaranda, Jarich, Jarry1250, JasontheFuzz1, Jburt1, Jbvetter, Jedwl,
Jeeter07, Jeff8765, Jeffq, Jelly-Proxy, Jenks1987, Jeremy68, JerryFriedman, Jh51681, JimD, JimmyKooch, Jiy, Jmchuff, Johnwcowan, Jojalozzo, Jons63, Jordanburtwiki, Jorunn, Josh Cherry,
Jrincayc, Jtle515, Julianortega, Junkklc, Jusdafax, Justzisguy, K kokkinos, KGasso, Kahusi, Kangaru99, Kaszeta, Kat Malone, Kate, Kdammers, Keegan, Keegscee, Keepssouth, Keilana, Kel123,
Kelapstick, Khaosworks, Khfan93, Kimiko, Kinitawowi, Knattypheet, Koavf, Kowloonese, Kratos94, Kubigula, Kurasuke, Kurt Jansson, Kwamikagami, Kzollman, LAlawMedMBA, LDHan,
Lambyte, Lannm, Lawrennd, Lee Daniel Crocker, Leonel-Favela, Liamdooley, Liface, LilHelpa, Lilwik, Lincoln gb, LittleDan, Liu Bei, Llakais, Lochaber, Londonsista, Loool, Looris, Lord
Emsworth, LordSimonofShropshire, Lowellian, LtPowers, Luccas, Ludraman, LukeSurl, Lukejameslarson, Luminifer, Luna Santin, M.nelson, M1j2c3s4, M4Spurs, MBisanz, MER-C, MJongo,
Mackeriv, Madchester, Maeglin Lómion, Magister Mathematicae, Malerin, Malinaccier, Managore, Mandarax, Mareklug, MarkBuckles, Markhurd, Markushes, Martarius, Martindo, Matt
Yeager, Matt the golfer, MattGiuca, MaxPower, Maxamegalon2000, Maxhippo, Maximus Rex, Maycelestia, McGeddon, MeaCulpa, MeekSaffron, Meelar, Melchoir, Mercy, Merlion444,
Merotoker1, Merphant, Michael Hardy, Michaelas10, Michaeldrayson, MickeytheMuse, MightyWarrior, Miguel Andrade, Mike R, Mike Rosoft, Mikecaps, Mild Bill Hiccup, Milk mustache 88,
Mindmatrix, Minesweeper, Minna Sora no Shita, Mirokado, MisterActually, Mjquinn id, Ml.paumen, Moadeeb, Monotonehell, Movingboxes, Mrflip, Muhandes, Murtasa, Mushroom,
Mythsearcher, N p holmes, Namflnamfl, Narcosis17, Natl1, Nbarth, Ndyguy, NeilN, Nenuial, NeoVampTrunks, NeonMerlin, Neoyamaneko, Neutrality, NewEnglandYankee, Nick, Nick
Dillinger, Nidht, Nmnogueira, Noah Salzman, Noe, Nojhan, Nolanus, NorthernThunder, Northstop, Nowah Balloon, Ntrobrn, Nulzilla, Nunh-huh, Nurben, Nycolo, Nyttend, OMGsplosion,
Olessi, Oliver Pereira, Olivier, Omicronpersei8, Opelio, Optigan13, Orrelly Man, Ortolan88, Osquar F, Ost316, OtterSmith, PBP, Pace212, PacificBoy, Paigeycat, Pakaran, Patrick, Patrickocal,
Paul August, Pepeeg, Pere Serafi, Peter Karlsen, Petesh, Pgiii, Phil Boswell, PhilHibbs, Philcha, Philip Trueman, PhilipMW, Phlegatu, Phoenix2, PhysPhD, Pichpich, Piledhigheranddeeper,
Pimemorizer, Pioggg, Pippu d'Angelo, Platypus222, Plb canada, Plumcherry, Pmlineditor, Poccil, Poetdancer, Polylerus, Poobslag, Poofdd, Porsche997SBS, Prari, Pretzelpaws, Proofreader77,
Pubaquoc, Purpy Pupple, Q0, Qartis, Quebec99, Queezbo, Quidam65, Quintin3265, Qwerfdsa12345, Qwyrxian, R'n'B, RA0808, RJHall, Radagast, RadioFan, RadioYeti, RainR, Raker, Ram4eva,
Ranatoro, Randomizedreading, Rar1ty2008, RayAYang, Reluctantpopstar, Revolus, Revth, ReyBrujo, Reywas92, Ricfromc, Rich Farmbrough, RickK, Rickterp, Ricky81682, Riodeplata,
Rjwilmsi, Rm999, Robbe, Robinh, Robofish, Rockdude164, Rockpapernazi, Roehl Sybing, Roger1990, Rogerborg, Roke, Rolypolyman, Ronhjones, Rorro, RoySmith, Rpsinjustice, Rror,
Ryulong, SAUNDERS, SM, SMcCandlish, SQGibbon, SWAdair, Sam Hocevar, Samkass, Sangajin, Satanicbowlerhat, Savidan, Scbp, Schaufensterpuppen, SchmuckyTheCat, SchuminWeb,
Scizor53, Scott Sanchez, Scott5114, ScottMHoward, Sdgranger12, Seankavan65, Secret Saturdays, Seidenstud, Seishirou Sakurazuka, Semper discens, Senor jones, SensibleOxymoron,
Sergiu.dumitriu, Sertion, Seth Ilys, Sewing, Sfordjasiri, Shaizakopf, Sharkface217, Sharksfin, Shiggy, Shoeofdeath, Sidious1701, Sietse Snel, Signalhead, Simonjwall, Simultaneous movement,
Sintaku, Siroxo, Sjl0523, Skinnyreds, Skrim, Slagheap, Slakr, Slashme, Sligocki, Smmurphy, Snoyes, Snwchick134, Solar flute, Some P. Erson, Somewherepurple, Son, Sonjaaa, Sophia,
Sopranosmob781, Sottolacqua, Sowsnek, Sparky the Seventh Chaos, SpikeTorontoRCP, Spittlespat, Squilibob, Staecker, Stephen, Stephen Bain, Stepherific, Steve03Mills, Steveohare, Stickee,
Storm Rider, Sullivan.t.j, Supahrev2, SuperMidget, Superdude0721, Suruena, Susurrus, Svensandberg, Swatjester, Synchronism, Synethos, TAKASUGI Shinji, TERdON, TJRC, TKD, TXiKi,
Taborgate, Tabrel, Tail, TakuyaMurata, Tanner Swett, Ted Mightlight, Tesseran, Tfischer, The Quill, The Thing That Should Not Be, The wub, TheBigSmoke, TheCoffee, TheConduqtor,
TheEditrix2, TheHYPO, Thesteve, Thevogel, Thingg, ThreePointOneFour, Thumperward, Thunderforge, Tickenest, Tide rolls, Timlane, Timuche, Timwi, Tlotoxl, Tombomp, Tomchen1989,
Trelvis, Trunks ishida, Truthnlove, Turkishbob, Twinkler4, Twodeel, Typer 525, Typhoonchaser, Typofixer76, Tyrenius, Ugen64, Ularevalo98, Uncle Dick, Uncle Milty, Unmitigated Success,
Unreal7, Useight, Utcursch, Vague Rant, Vegaswikian, Veinor, Verkohlen, Versus22, VidTheKid, Vinner, Vip747, Viridae, Vista4u2, Vitalyb, Volunteer Marek, Voretus, Wakablogger2,
Walden, Walkabout12, Wereon, Whitepaw, Wik, WikiPediaAid, Wikibofh, Wikidogia, Wikifier, Wikijens, Wikinist, Wikipelli, Wildtornado, Willow177, Wizardboy369, Wizardhat, Wknight94,
Wla2000, Wolfmankurd, Woob, Wragge, Wtmitchell, Wzich2, Xaa, Xihr, Xme, Xnn, XtremeRPS, XxTimberlakexx, Xyzzyplugh, YellowSno, Ynglon, Ytrottier, Yuut, Yyhkitty, ZPM, Zack
wadghiri, Zandperl, Zeimusu, Zellin, Zendonut, Zhou Yu, Zthegolfer08, Zzyzx11, Σ, 1749 anonymous edits

Pirate game  Source: http://en.wikipedia.org/w/index.php?oldid=457065613  Contributors: Age Happens, Alan De Smet, Alex3917, Am088, Andreas Kaufmann, C-randles, Cerejota,
Cptnoremac, Dartbanks, Drpaule, EdC, Ehheh, Furrykef, Geeman, HJ Mitchell, Headbomb, Henrygb, HumbleSloth, IanGrey531, JNW, Jd2718, Kzollman, Liquidblake, LukeSurl,
MartinEvensen, MattGiuca, McGeddon, Messy Thinking, Michael Slone, Mlewan, Moondoll, Moopiefoof, MrOllie, Munksgaard, Nice44449, Oxymoron01, Petter Strandmark, Preposterous,
Rcwchang, Rituraj.shukla, Roborrob, SC979, Smmurphy, Smyth, Stellmach, Tktktk, Vishahu, VladGenie, Zro, 50 anonymous edits

Dictator game  Source: http://en.wikipedia.org/w/index.php?oldid=453481448  Contributors: Astanton, B4hand, Barticus88, Batmanand, Bluemoose, Calvinballing, Chris V. W., Cleisthenes2,
Dysmorodrepanis, Econobbler, Econobuster, Ecthelion83, Elembis, Evercat, FT2, False vacuum, Grutness, Igodard, Jason Recliner, Esq., Joetroll, Joolsa123, Kzollman, Messy Thinking, Michael
Hardy, Mikespoff, Nb6, Ngchen, Pesilion, Reywas92, Rjwilmsi, Robin S, Rosiestep, SalmonHelmet, Thomasmeeks, Trialsanderrors, VanHelsing23, Wfisher, Wragge, 32 anonymous edits

Public goods game  Source: http://en.wikipedia.org/w/index.php?oldid=464749063  Contributors: AleXd, Anthon.Eff, Cmdrjameson, Craw-daddy, Darkildor, FrankTobia, Glen,
IslandHopper973, Jantin, Kzollman, Longhair, MrBoo, Optikos, PhysPhD, Quarl, Rinconsoleao, RobyWayne, Sean0608, Silverhelm, Tiger Khan, UnknowableSelf, Vinophil, Wang.Zhijian.Zju,
Wang.zhijian, Wragge, 15 anonymous edits

Blotto games  Source: http://en.wikipedia.org/w/index.php?oldid=463986182  Contributors: CRGreathouse, GregorB, INVERTED, JocK, Kilva, Petter Strandmark, R.e.b., Smacdonell, Synergy,
The Anome, 16 anonymous edits

War of attrition  Source: http://en.wikipedia.org/w/index.php?oldid=434686842  Contributors: Arob33, Canned Soul, Czar Kirk, David Schaich, Exomnium, GregorB, Gt.kls, JoeSmack, John
Quiggin, Kzollman, Loopy48, Mcdennis13, Njk92, NorthernThunder, Pete.Hurd, Rspeer, Scorpion451, Smmurphy, TelopiaUtopia, Tregoweth, Trialsanderrors, Volemak, WinterSpw, 12
anonymous edits

El Farol Bar problem  Source: http://en.wikipedia.org/w/index.php?oldid=456648115  Contributors: Bjp716, Brogersoc, Cobain, Dreadstar, Faolin42, GregorB, Hgintis, Hyphz, Kzollman,
Matwood, Myglesias, Quuxplusone, S.MahdiRazavi, Saint141, Trialsanderrors, Txomin, Vanished User 1004, Wireader, 11 anonymous edits

Fair division  Source: http://en.wikipedia.org/w/index.php?oldid=460536737  Contributors: Alexb@cut-the-knot.com, Anonymous Dissident, Buenasdiaz, Cactusthorn, Cacycle, Calvinballing,
D6, David Eppstein, Delaszk, Dfeldmann, Dmcq, Eequor, Enragedeconomist, Ergotius, Euphrosyne, Gdr, Huw Powell, Igodard, Infrogmation, InverseHypercube, Itai, Ixfd64, Jitse Niesen, John
Reid, Jonkerz, Kzollman, LeeJacksonKing, Malcohol, Matt Crypto, Melchoir, Miken32, Nbatra, Noca2plus, Norvy, Ntsimp, Oleg Alexandrov, Oxymoron83, PMajer, Pete.Hurd, Piotrus,
Psiphiorg, R'n'B, Rdancer, Rjwilmsi, Robinh, Rrh02, Shanel, Spike Wilbury, Stephen Bain, Tali.g, Thomasmeeks, Triathematician, Vicarious, Volunteer Marek, Wavelength, Wlod, Woohookitty,
41 anonymous edits

Cournot competition  Source: http://en.wikipedia.org/w/index.php?oldid=465688491  Contributors: Antonorsi, Asav, Barcturus, Bluemoose, Brisvegas, Clsrskv, Common Man, Coolian,
Coppertwig, Cretog8, Eastlaw, Fluffernutter, Frank MacCrory, Gabriel.c.drummond.cole, GregorB, Huax, Iminto, Indirap, Isnow, Jackzhp, Jagerman, Jebba, Jeff3000, Jonkerz, Karada,
Katieishot, Kelly Martin, Kochiuyu, Kylu, Kzollman, LachlanA, Landroni, Luke wainscoat, Maurreen, Melfassy, Nikit16, Protonk, Radell, Rinconsoleao, Rl, Ruarrimactire, Sergiodf, Shreevatsa,
SimonP, TheStarter, Trammerman, Treborbassett, Urbansuperstar, Vgnohz, Viridae, Volunteer Marek, Zachrome, 90 anonymous edits

Deadlock  Source: http://en.wikipedia.org/w/index.php?oldid=436435529  Contributors: Coslenchip, Cretog8, Kelseyxckannibal, Kzollman, Mnh, Richard New Forest, Vermin1302, Ziggurat, 4
anonymous edits

Unscrupulous diner's dilemma  Source: http://en.wikipedia.org/w/index.php?oldid=465522827  Contributors: Apollo Augustus Koo, Ben Standeven, Bender235, DEDemeza, Diego Queiroz,
Hede2000, Heron, Kzollman, Lusanaherandraton, OwenX, Pnm, Rich Farmbrough, Tktktk, User6985, Vroo, 8 anonymous edits

Guess 2/3 of the average  Source: http://en.wikipedia.org/w/index.php?oldid=461227873  Contributors: AmigoCgn, Bender235, Blueviking, EdC, Fnielsen, Halcyonhazard, Henrygb,
Icestorm815, Iohannes Animosus, Isomorphic, JaGa, Kortaggio, Kzollman, Leo leo, Markhurd, Mipmip, Nickybutt, Rinconsoleao, ST47, SmartGuy, Tesseran, Twthmoses, UnMatChedProWess,
22 anonymous edits

Kuhn poker  Source: http://en.wikipedia.org/w/index.php?oldid=416537759  Contributors: 2005, Alai, Bender235, Enkrates, Evercat, Ezrakilty, GregRobson, Grutness, Henrygb, Kzollman,
Lomn, Nintendere, Quest for Truth, Woohookitty, 9 anonymous edits

Nash bargaining game  Source: http://en.wikipedia.org/w/index.php?oldid=416513323  Contributors: Akalai, Allliam, Bender235, Cfp, Cretog8, Dondegroovily, E.qrqy, Ever wonder,
Floquenbeam, Giraffedata, Jessieliaosha, MGM08314, PigFlu Oink, 22 anonymous edits

Screening game  Source: http://en.wikipedia.org/w/index.php?oldid=462464163  Contributors: Dave Rebecca, Djdoobwah, Dmr2, Lmatt, Mild Bill Hiccup, Smmurphy, 2 anonymous edits

Princess and monster game  Source: http://en.wikipedia.org/w/index.php?oldid=461741048  Contributors: Amalas, Dmcq, Gnomus, Guoguo12, Headbomb, Imnotoneofyou, Jason Quinn, JocK,
Julmonn, Leonhard geupel, Maxal, Michael Hardy, Rich Farmbrough, Shuroo, Thorseth, Xnn, 41 anonymous edits

Minimax  Source: http://en.wikipedia.org/w/index.php?oldid=465609615  Contributors: A. Pichler, AllUltima, Andre Engels, Andresambrois, Anne Bauval, Artichoker, Arvindn, BKfi, Beta16,
Borgx, Bpeps, Brainix, Brews ohare, Cburnett, ChangChienFu, CharlesC, CharlesGillingham, Conversion script, Cretog8, Dante Shamest, David Haslam, DavidCary, Dr. Persi, Długosz, El C,
Erasmussen, Foobar, Gametheorist77, Garygagliardi, Geometry guy, Giftlite, Glengordon01, Grick, Henrygb, Honza Záruba, Hunyadym, Icaoberg, Imran, Jitse Niesen, Karenjc, Karl-Henner,
Kiefer.Wolfowitz, Kku, LOL, MH, Maschelos, Mat-C, MatrixFrog, MattGiuca, Maximin, Michael Hardy, Moneky, Monty845, Nbarth, NeonMerlin, Nmnogueira, OckRaz, Ohconfucius,
OliAtlason, Pegua, Pete.Hurd, PhilKnight, Philip Trueman, PsiXi, Qwertyus, Qwfp, R3m0t, RalfKoch, Remy B, Rich Farmbrough, Riitoken, Robert Dober, RobertHannah89, Robinh, Sam
Hocevar, Sampo, Scott sauyet, Sean Kelly, Shuroo, SlamDiego, Smmurphy, Sniedo, Spitfire, Syr0, Telespiza, Terry0201, Tghe-retford, The Anome, Thinboy00P, Tijfo098, Tobias Hoevekamp,
Trieper, Trovatore, UkPaolo, Wereon, Will Beback, WriterHound, Xijiahe, ZeroOne, Zundark, Zvika, 166 anonymous edits

Purification theorem  Source: http://en.wikipedia.org/w/index.php?oldid=422065771  Contributors: Cretog8, J04n, Kzollman, Lionelkarman, Profundity06, Rodii, 7 anonymous edits

Folk theorem  Source: http://en.wikipedia.org/w/index.php?oldid=462458197  Contributors: Adoniscik, Alberto Chilosi, Bgold, David Eppstein, Gdr, Gt.kls, Jergen, Kjlewis, Kzollman, Michael
Hardy, Paine Ellsworth, Pete.Hurd, Philwelch, Reverend T. R. Malthus, Shenanenigans, Smmurphy, 8 anonymous edits

Revelation principle  Source: http://en.wikipedia.org/w/index.php?oldid=416754409  Contributors: Bmcnamee, Btyner, CRGreathouse, Ccerer, Counterfact, GRBerry, GregorB, Kzollman,
Lingust, Mdmcginn, Ph.eyes, Sander87, Smmurphy, Thijswijs, Volunteer Marek, 12 anonymous edits

Arrow's impossibility theorem  Source: http://en.wikipedia.org/w/index.php?oldid=463843048  Contributors: A poor workman blames, AMH-DS, AaronSw, Abd, Akriasas, Alighodsi2,
Arnob1, Arrrgggument, Ashley Y, AxelBoldt, Bongomatic, Bromskloss, Bryan Derksen, CRGreathouse, Cancan101, CanisRufus, Carl.bunderson, Cconnett, Colignatus, Commadot, D6,
DaGizza, Dan Wylie-Sears 2, DanKeshet, David Eppstein, Dclo, Delikedi, Derek Ross, Dissident, DocGov, Dr. I .D. A. MacIntyre, Draicone, Eberhard Wesche, Edward, Elmju, Enchanter,
Euchrid, Frango com Nata, Gartogg, Geoffrey, Geoffrey.landis, Giftlite, GreatBigCircles, Gregbard, Grick, Guanaco, Gwern, Hairy Dude, HalfDome, Henrygb, Homunq, HorsePunchKid, Icairns,
Infovarius, JRR Trollkien, JRSpriggs, Jdlh, John Quiggin, Joriki, Josh Cherry, Jsnx, KSmrq, Karada, Kelson, Kevin, Khazar, Kukkurovaca, LC, LamilLerran, Laurusnobilis, Liftarn, Liko81,
Lussmu, MarkusSchulze, MartinHarper, Masterpiece2000, Mateo SA, Matt Gies, Matt me, Matthew Woodcraft, Maurreen, Melchoir, Mezzaluna, Miranche, Mousomer, MrOllie, Mschamis,
Natnatonline, Nbarth, NeilTarrant, Neilc, NilEinnoc, Oliphaunt, Osndok, Pace212, Paladinwannabe2, Patrick, Paul Stansifer, PhilipMW, Punctured Bicycle, RDBury, Rangek, Rhobite, Rl,
Rmharman, RobLa, Rspeer, Ruakh, Sararkd, Sbyrnes321, Scott Ritchie, Sdalva, Sf talkative, Shreevatsa, Slipperyweasel, Smmurphy, Solarapex, Spacethingy, Spot, Svick, TUF-KAT, Tbouricius,
Tenmei, Teply, Tesseran, The Anome, Theorist2, Thomasmeeks, Thumperward, Tiger Khan, Tim Ivorson, TittoAssini, Tom harrison, Unweaseler, Vadim Makarov, Villarinho, VoteFair,
Waisbrot, Wclark, Wikiborg, Wikidea, William Avery, Wknight94, X1011, Y2y, Zanaq, Zarvok, Zundark, Zvika, 177 anonymous edits

Tragedy of the commons  Source: http://en.wikipedia.org/w/index.php?oldid=466188961  Contributors: *jb, 64.34.161.xxx, Abu badali, Adamsan, Addshore, Aetheling, Akadruid, Akhil 0950,
Al Lemos, Alan Liefting, Ale jrb, AlecStewart, Alejos, Aliveboy, Amritasenray, Anarcho-capitalism, Angela, AngoraFish, Anlace, Antandrus, Anthony, Ashernm, Ashmoo, AtD, BAxelrod, BIL,
Banaticus, Batmanand, Bcasterline, Belg4mit, BenBildstein, Bender235, Bergsten, Bharral, Bhuston, Bibble Bibble, BigK HeX, Bobby H. Heffley, Bobrayner, Bookgrrl, Brighamd, Brusegadi,
Bruske, Bryan Derksen, Btwied, Burner0718, Byelf2007, Charles Matthews, Chris Howard, ChrisG, Christinebobula, Christofurio, Clarahamster, Cobalty, Colchicum, Colonies Chris,
CompliantDrone, Conversion script, Coolcaesar, Cornell92, Cretog8, Cscoxk, Cshay, Cuddlyable3, DDSaeger, DGreuel, Dale Arnett, Damicatz, Daniel Collins, Debresser, Decumanus,
Dessources, Dicklyon, DirkvdM, Dittaeva, Dj Capricorn, Dogface, DonSiano, Dougher, Dreadstar, Droper, Duncan.Hull, E Pluribus Anthony, EWikist, EagleFan, Echuck215, Ecoevodevo,
Eequor, El C, ElectricRay, Envirocorrector, Environnement2100, Epbr123, Epipelagic, Espoo, Eternal dragon, Etherialemperor, Everything counts, Exander, Fastily, Father Goose, Ferrierd,
Fieldday-sunday, Flammifer, Foobarnix, Francis Davey, Fuhghettaboutit, Fulvio.volpe, GGano, Gamaliel, Gcnak, Glen, Gloriamarie, Gobonobo, GoingBatty, Good Olfactory, Gracenotes,
GreatWhiteNortherner, Gregbard, Grosscha, Grstain, Guaka, Gueneverey, Hadal, Hamiltonian, Haus, Hearlpence, Heavilybroken16, Hillboy, Hmains, Horseinthebucket, Hpvpp, Hroðulf,
Hydrargyrum, Ian Burnet, Inquam, InverseHypercube, Iris lorain, Irockoutalot, Isomorphic, IstvanWolf, J.delanoy, JLaTondre, Jaganath, Jan.chaloupka, JanDeFietser, Jasperdoomen, JayJasper,
Jdperkins, Jmrowland, Joao, Joel Kincaid, John Lynch, John Quiggin, Johnevanperigoe, Jonathanischoice, Joriki, Jpom, Jprupp, Jthillik, Just Another Dan, KAM, Ka Lo 99, Kablammo,
Kai-Hendrik, Kaihsu, Kakoui, Katoa, Kaze0010, Keilana, Kennethmaage, Kevehs, Kevinb9n, Kingturtle, Kiwikrazy6, Knuzeb, Kuru, Larry_Sanger, Lawlorg, Ldrhcp, LeadSongDog, Lendu,
Lentower, Lindosland, Lkri8888, Lllopez66, Logic7, Loodog, Lowellian, Lquilter, Manda.L.Isch, Manukaur, Martey, MartinHarper, Materialscientist, Matt Yeager, MattWright, Megarhyssa,
Membender, Metamagician3000, Mgdurand, Mgiganteus1, Mike Young, Mike18xx, Mindmatrix, MissGlass, Misuki123, Morrem, MotherForker, Mozzerati, Mphasak, MrJones, NKarstens,
Nakon, Nat Krause, Natevw, Nbarth, Nehrams2020, Nick Wilson, NickyMcLean, Nighthawknz, Nikodemos, Nils Simon, Noisy, Nonatus, Northgrove, Nunquam Dormio, Ollie Garkey,
Ontarioboy, Orangeoakland, OwenX, P0lyglut, PGPirate, PMLawrence, Papeschr, Persian Poet Gal, Pestergaines, Pgan002, Phil Holmes, Philanthropologist, Philip Trueman, Phlegat, Pinethicket,
Plumbago, Pmrich, Polentario, Postdlf, Quebec99, R Lowry, R'n'B, REFORMjustice, RJII, Radical Mallard, Radishes, Rbeas, Rchertzy, Rcpink27, Rdrozd, RexNL, Rfolwell, Rich Farmbrough,
Rich Janis, Richard001, RickDC, Risk one, Rjwilmsi, Rmalloy, Rnapier, Robbrown, Roger657, Russelw65, Ryguasu, SMcCandlish, Saanvik, Sabedon, Sadiqsalman, Schmloof, Scientus,
Scjessey, Scott Ritchie, Sean01, Search4Lancer, ShaneCavanaugh, Shidobu, Shorelander, Shrumster, Silence, Sillybilly, SimonP, Sindinero, Sjeng, Slakr, SlimVirgin, Smallman12q, Smmurphy,
Snalwibma, SpK, SpringSloth, SteinbDJ, StevenRay Petersen, Sunray, Tabletop, Tbhotch, Tempshill, Teratornis, The Anome, The wub, Thehotelambush, Theo F, Throwaway85, Time River,
Tktktk, Tom harrison, TomCerul, Torchiest, Tracy2214, Tradimus, Trev M, Trialsanderrors, Troy 07, Twas Now, Tweedy7736, Ultramarine, Undead warrior, UnderHigh, UnitedStatesian,
User2004, Utcursch, Uthbrian, Vidkun, Volunteer Marek, WVhybrid, Waimunyoon, Walshga, Wavelength, Wehpudicabok, Will Beback, William M. Connolley, Wizofaus, Woz2, Wragge,
Writershorse, Ww, YUL89YYZ, Yueni, Zach99998, Zerokitsune, Zodon, Ъыь, 505 anonymous edits

Tyranny of small decisions  Source: http://en.wikipedia.org/w/index.php?oldid=464097939  Contributors: AlexanderKaras, Byelf2007, Charles Matthews, Cretog8, CrusaderForCommonEra,
Denny, Eastlaw, Edward, Epipelagic, Gadget850, Good Olfactory, Guillaume2303, Guslacerda, Hmains, John Darrow, LilHelpa, Mediation4u, Mjpollard, Wareh, Will Beback, Wragge, 4
anonymous edits

All-pay auction  Source: http://en.wikipedia.org/w/index.php?oldid=455033846  Contributors: Amalas, Brian Gunderson, Cretog8, Cronholm144, Edward Curran, GregorB, Haidongwang,
Homunq, Ikiwi, Jmayer, Pete.Hurd, Pingveno, Remember, The Anome, Vlad, Widget90, Wikiofdoom, 10 anonymous edits

List of games in game theory  Source: http://en.wikipedia.org/w/index.php?oldid=443589503  Contributors: Admiller, Ben Standeven, Common Man, Dak, DropDeadGorgias, Eggman64,
George Richard Leeming, Hede2000, JocK, Kumioko, Kzollman, Michael Hardy, Perryar, Phoz, Qblik, Quest for Truth, Robinh, Scorpion451, Smmurphy, Zephyrus67, דניאל צבי, 25 anonymous
edits

Image Sources, Licenses and Contributors


File:JohnvonNeumann-LosAlamos.gif  Source: http://en.wikipedia.org/w/index.php?title=File:JohnvonNeumann-LosAlamos.gif  License: Public Domain  Contributors: LANL
Image:Ultimatum Game Extensive Form.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Ultimatum_Game_Extensive_Form.svg  License: Creative Commons
Attribution-ShareAlike 3.0 Unported  Contributors: Kevin Zollman --Kzollman
Image:Centipede game.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Centipede_game.svg  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors:
MaxDZ8, based on work from Kzollman
Image:PD with outside option.svg  Source: http://en.wikipedia.org/w/index.php?title=File:PD_with_outside_option.svg  License: Creative Commons Attribution-ShareAlike 3.0 Unported
 Contributors: Kevin Zollman --Kzollman
File:Nash graph equilibrium.png  Source: http://en.wikipedia.org/w/index.php?title=File:Nash_graph_equilibrium.png  License: Public Domain  Contributors: Luis von Ahn, Andrew Krieger
File:SGPNEandPlainNE explainingexample.svg  Source: http://en.wikipedia.org/w/index.php?title=File:SGPNEandPlainNE_explainingexample.svg  License: Creative Commons
Attribution-Sharealike 3.0  Contributors: Me, Gillis Danielsen
Image:Battle of the sexes - perfect information.png  Source: http://en.wikipedia.org/w/index.php?title=File:Battle_of_the_sexes_-_perfect_information.png  License: Creative Commons
Attribution-ShareAlike 3.0 Unported  Contributors: Kevin Zollman User:Kzollman
Image:Battle of the sexes - imperfect information.png  Source: http://en.wikipedia.org/w/index.php?title=File:Battle_of_the_sexes_-_imperfect_information.png  License: Creative Commons
Attribution-ShareAlike 3.0 Unported  Contributors: Kevin Zollman User:Kzollman
File:Nuvola-inspired File Icons for MediaWiki-fileicon-doc.png  Source: http://en.wikipedia.org/w/index.php?title=File:Nuvola-inspired_File_Icons_for_MediaWiki-fileicon-doc.png  License:
unknown  Contributors: Michael180
Image:SGPNEandPlainNE explainingexample.svg  Source: http://en.wikipedia.org/w/index.php?title=File:SGPNEandPlainNE_explainingexample.svg  License: Creative Commons
Attribution-Sharealike 3.0  Contributors: Me, Gillis Danielsen
Image:Extensive form game 1.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:Extensive_form_game_1.JPG  License: GNU Free Documentation License  Contributors:
User:Treborbassett
Image:Extensive form game 2.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:Extensive_form_game_2.JPG  License: Public Domain  Contributors: Original uploader was
Treborbassett at en.wikipedia
Image:Extensive form game 3.1.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:Extensive_form_game_3.1.JPG  License: Public Domain  Contributors: Burgundavia, Grenavitar,
Liangent, Luk, Treborbassett, 2 anonymous edits
Image:Extensive form game 4.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:Extensive_form_game_4.JPG  License: Public Domain  Contributors: Original uploader was
Treborbassett at en.wikipedia
Image:Handshake (Workshop Cologne '06).jpeg  Source: http://en.wikipedia.org/w/index.php?title=File:Handshake_(Workshop_Cologne_'06).jpeg  License: Creative Commons
Attribution-ShareAlike 3.0 Unported  Contributors: Amada44, Dbenbenn, Tobias Wolter, 2 anonymous edits
Image:Scale of justice 2.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Scale_of_justice_2.svg  License: Public Domain  Contributors: DTR
Image:Signaling Game.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Signaling_Game.svg  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors: Kevin
Zollman --Kzollman
Image:Stanley Reiter MDdiagram.png  Source: http://en.wikipedia.org/w/index.php?title=File:Stanley_Reiter_MDdiagram.png  License: Public Domain  Contributors: Ogo
Image:Myerson ironing.png  Source: http://en.wikipedia.org/w/index.php?title=File:Myerson_ironing.png  License: Public Domain  Contributors: Ogo
Image:Reaction-correspondence-stag-hunt.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Reaction-correspondence-stag-hunt.jpg  License: GNU Free Documentation License
 Contributors: Bokken, 1 anonymous edits
Image:Reaction-correspondence-hawk-dove.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Reaction-correspondence-hawk-dove.jpg  License: GNU Free Documentation License
 Contributors: Bkell, Bokken
Image:Chicken Two Pop Replicator Dynamics Labeled.png  Source: http://en.wikipedia.org/w/index.php?title=File:Chicken_Two_Pop_Replicator_Dynamics_Labeled.png  License: Creative
Commons Attribution-ShareAlike 3.0 Unported  Contributors: Kevin Zollman Kzollman 07:25, 26 January 2007 (UTC)
Image:Chicken Replicator Dynamics.png  Source: http://en.wikipedia.org/w/index.php?title=File:Chicken_Replicator_Dynamics.png  License: Creative Commons Attribution-ShareAlike 3.0
Unported  Contributors: Kevin Zollman --Kzollman 05:54, 29 January 2007 (UTC)
Image:Centipede game.png  Source: http://en.wikipedia.org/w/index.php?title=File:Centipede_game.png  License: Creative Commons Attribution-ShareAlike 3.0 Unported  Contributors:
EugeneZelenko, Kzollman, MaxDZ8
Image:Nature and Appearance of Deer and how they can be hunted with Dogs Fac simile of a Miniature in the Livre du Roy Modus Manuscript of the Fourteenth Century National Library of Paris.png  Source: http://en.wikipedia.org/w/index.php?title=File:Nature_and_Appearance_of_Deer_and_how_they_can_be_hunted_with_Dogs_Fac_simile_of_a_Miniature_in_the_Livre_du_Roy_Modus_Manuscript_of_the_Fourteenth_Century_National_Library_of_Paris.png  License: Public Domain  Contributors: Ignacio Icke, Jastrow, Kilom691, MU, Makthorpe, Shakko
Image:Reaction-correspondence-matching-pennies.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Reaction-correspondence-matching-pennies.jpg  License: GNU Free
Documentation License  Contributors: Bokken
File:Rock paper scissors.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Rock_paper_scissors.jpg  License: GNU Free Documentation License  Contributors: Bkell, Heiko,
Magasjukur2, Nyttend, Vix929, 1 anonymous edits
Image:Rock-paper-scissors (rock).png  Source: http://en.wikipedia.org/w/index.php?title=File:Rock-paper-scissors_(rock).png  License: Creative Commons Attribution-Sharealike 3.0
 Contributors: Sertion
Image:Rock-paper-scissors (paper).png  Source: http://en.wikipedia.org/w/index.php?title=File:Rock-paper-scissors_(paper).png  License: Creative Commons Attribution-Sharealike 3.0
 Contributors: Sertion
Image:Rock-paper-scissors (scissors).png  Source: http://en.wikipedia.org/w/index.php?title=File:Rock-paper-scissors_(scissors).png  License: Creative Commons Attribution-Sharealike 3.0
 Contributors: Sertion
Image:Rock Paper Scissors Lizard Spock en.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Rock_Paper_Scissors_Lizard_Spock_en.svg  License: Public Domain  Contributors:
VidTheKid
Image:Pierre ciseaux feuille lézard spock aligned.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Pierre_ciseaux_feuille_lézard_spock_aligned.svg  License: Creative Commons
Attribution-Sharealike 3.0,2.5,2.0,1.0  Contributors: DMacks (talk)
Image:Paul Cézanne 076.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Paul_Cézanne_076.jpg  License: Public Domain  Contributors: Bukk, EDUCA33E, Jmdesbois, Kairios,
Olivier2
File:rps2010 2.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Rps2010_2.jpg  License: Public Domain  Contributors: james bamber
File:Flag of England.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Flag_of_England.svg  License: Public Domain  Contributors: Anomie
File:Flag of the United States.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Flag_of_the_United_States.svg  License: Public Domain  Contributors: Anomie
File:Flag of Australia.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Flag_of_Australia.svg  License: Public Domain  Contributors: Anomie, Mifter
File:Flag of Norway.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Flag_of_Norway.svg  License: Public Domain  Contributors: Dbenbenn
File:Flag of Ireland.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Flag_of_Ireland.svg  License: Public Domain  Contributors: User:SKopp
File:Flag of Canada.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Flag_of_Canada.svg  License: Public Domain  Contributors: Anomie
File:Flag of South Africa.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Flag_of_South_Africa.svg  License: unknown  Contributors: Adriaan, Anime Addict AA, AnonMoos,
BRUTE, Daemonic Kangaroo, Dnik, Duduziq, Dzordzm, Fry1989, Homo lupus, Jappalang, Juliancolton, Kam Solusar, Klemen Kocjancic, Klymene, Lexxyy, Mahahahaneapneap, Manuelt15,
Moviedefender, NeverDoING, Ninane, Poznaniak, Przemub, SKopp, ThePCKid, ThomasPusch, Tvdm, Ultratomio, Vzb83, Zscout370, 34 anonymous edits
File:Flag of Ghana.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Flag_of_Ghana.svg  License: Public Domain  Contributors: Benchill, Fry1989, Henswick, Homo lupus,
Indolences, Jarekt, Klemen Kocjancic, Neq00, OAlexander, SKopp, ThomasPusch, Threecharlie, Torstein, Zscout370, 5 anonymous edits
Image:Pyle pirates burying2.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Pyle_pirates_burying2.jpg  License: Public Domain  Contributors: Beej71, BrokenSphere, Jappalang,
Mattes, Quibik, Wolfmann, 1 anonymous edits
Image:El Farol Restaurant and Cantina, Santa Fe NM.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:El_Farol_Restaurant_and_Cantina,_Santa_Fe_NM.jpg  License: Creative
Commons Attribution 3.0  Contributors: John Phelan
Image:Berlin Blockade-map.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Berlin_Blockade-map.svg  License: Creative Commons Attribution-ShareAlike 3.0 Unported
 Contributors: historicair 23:55, 11 September 2007 (UTC)
Image:economics cournot diag1 svg.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Economics_cournot_diag1_svg.svg  License: Public Domain  Contributors: Twisp,Bluemoose
Image:economics cournot diag2 svg.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Economics_cournot_diag2_svg.svg  License: Public Domain  Contributors: Twisp,Bluemoose
Image:economics cournot diag3 svg.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Economics_cournot_diag3_svg.svg  License: Public Domain  Contributors: Twisp,Bluemoose
Image:economics cournot diag4 svg.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Economics_cournot_diag4_svg.svg  License: Public Domain  Contributors: Twisp,Bluemoose
Image:Minimax.svg  Source: http://en.wikipedia.org/w/index.php?title=File:Minimax.svg  License: Creative Commons Attribution-Sharealike 2.5  Contributors: Nuno Nogueira (Nmnogueira)
File:Cows on Selsley Common - geograph.org.uk - 192472.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Cows_on_Selsley_Common_-_geograph.org.uk_-_192472.jpg  License:
Creative Commons Attribution-Share Alike 2.0 Generic  Contributors: Lamiot
File:Lacanja burn.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:Lacanja_burn.JPG  License: Public domain  Contributors: Jami Dwyer
File:Twin Glens abutment.jpg  Source: http://en.wikipedia.org/w/index.php?title=File:Twin_Glens_abutment.jpg  License: Creative Commons Attribution-Share Alike  Contributors: Choess
File:Jones River marshland near mouth.JPG  Source: http://en.wikipedia.org/w/index.php?title=File:Jones_River_marshland_near_mouth.JPG  License: Creative Commons
Attribution-Sharealike 3.0,2.5,2.0,1.0  Contributors: OldPine

License
Creative Commons Attribution-Share Alike 3.0 Unported
//creativecommons.org/licenses/by-sa/3.0/
