
Prisoner's dilemma

For the parole deal in prisons, see Innocent prisoner's dilemma.

[Figure: Prisoner's dilemma payoff matrix]

The prisoner's dilemma is a standard example of a game analyzed in game theory that shows why two completely rational individuals might not cooperate, even if it appears that it is in their best interests to do so. It was originally framed by Merrill Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence rewards and named it "prisoner's dilemma" (Poundstone, 1992), presenting it as follows:

    Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge. They hope to get both sentenced to a year in prison on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain. Each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The offer is:

    - If A and B each betray the other, each of them serves 2 years in prison
    - If A betrays B but B remains silent, A will be set free and B will serve 3 years in prison (and vice versa)
    - If A and B both remain silent, both of them will serve only 1 year in prison (on the lesser charge)

It is implied that the prisoners will have no opportunity to reward or punish their partner other than the prison sentences they get, and that their decision will not affect their reputation in the future. Because betraying a partner offers a greater reward than cooperating with them, all purely rational self-interested prisoners would betray the other, and so the only possible outcome for two purely rational prisoners is for them to betray each other.[1] The interesting part of this result is that pursuing individual reward logically leads both of the prisoners to betray, when they would get a better reward if they both kept silent. In reality, humans display a systemic bias towards cooperative behavior in this and similar games, much more so than predicted by simple models of "rational" self-interested action.[2][3][4][5] A model based on a different kind of rationality, where people forecast how the game would be played if they formed coalitions and then maximized their forecasts, has been shown to make better predictions of the rate of cooperation in this and similar games, given only the payoffs of the game.[6]

An extended "iterated" version of the game also exists, where the classic game is played repeatedly between the same prisoners, and consequently, both prisoners continuously have an opportunity to penalize the other for previous decisions. If the number of times the game will be played is known to the players, then (by backward induction) two classically rational players will betray each other repeatedly, for the same reasons as the single-shot variant. In an infinite or unknown-length game there is no fixed optimum strategy, and prisoner's dilemma tournaments have been held to compete and test algorithms.[7]

The prisoner's dilemma game can be used as a model for many real-world situations involving cooperative behaviour. In casual usage, the label "prisoner's dilemma" may be applied to situations not strictly matching the formal criteria of the classic or iterative games: for instance, those in which two entities could gain important benefits from cooperating or suffer from the failure to do so, but find it merely difficult or expensive, not necessarily impossible, to coordinate their activities to achieve cooperation.
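As a concrete illustration, the offer above can be written out and each prisoner's best response computed. The table below is a direct transcription of the sentences in Tucker's formulation; the function and variable names are ours:

```python
# Tucker's offer as a payoff table; payoffs are years in prison (lower is better).
SENTENCES = {
    # (A's choice, B's choice) -> (A's years, B's years)
    ("betray", "betray"): (2, 2),
    ("betray", "silent"): (0, 3),
    ("silent", "betray"): (3, 0),
    ("silent", "silent"): (1, 1),
}

def best_response_for_A(b_choice):
    """Return A's choice minimizing A's own sentence, holding B's choice fixed."""
    return min(("betray", "silent"), key=lambda a: SENTENCES[(a, b_choice)][0])

# Whatever B does, A's best response is to betray:
print(best_response_for_A("silent"))  # betray (0 years beats 1 year)
print(best_response_for_A("betray"))  # betray (2 years beats 3 years)
```

By symmetry the same holds for B, which is the betrayal outcome described above.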


1 Strategy for the prisoner's dilemma

The two prisoners cannot communicate: they are separated in two individual rooms. The normal game is shown below (each cell lists A's sentence, then B's):

                        B stays silent           B betrays
    A stays silent      1 year, 1 year           3 years, goes free
    A betrays           goes free, 3 years       2 years, 2 years

It is assumed that both understand the nature of the game, and that despite being members of the same gang, they have no loyalty to each other and will have no opportunity for retribution or reward outside the game. Regardless of what the other decides, each prisoner gets a higher reward by betraying the other ("defecting"). The reasoning involves an argument by dilemma: B will either cooperate or defect. If B cooperates, A should defect, because going free is better than serving 1 year. If B defects, A should also defect, because serving 2 years is better than serving 3. So either way, A should defect. Parallel reasoning will show that B should defect.

Because defection always results in a better payoff than cooperation regardless of the other player's choice, it is a dominant strategy. Mutual defection is the only strong Nash equilibrium in the game (i.e. the only outcome from which each player could only do worse by unilaterally changing strategy). The dilemma, then, is that mutual cooperation yields a better outcome than mutual defection, but it is not the rational outcome, because from a self-interested perspective the choice to cooperate, at the individual level, is irrational.

2 Generalized form

The structure of the traditional prisoner's dilemma can be generalized from its original prisoner setting. Suppose that the two players are represented by the colors red and blue, and that each player chooses to either "cooperate" or "defect".

If both players cooperate, they both receive the reward R for cooperating. If both players defect, they both receive the punishment payoff P. If Blue defects while Red cooperates, then Blue receives the temptation payoff T, while Red receives the "sucker's" payoff, S. Similarly, if Blue cooperates while Red defects, then Blue receives the sucker's payoff S, while Red receives the temptation payoff T.

This can be expressed in normal form (each cell lists Red's payoff, then Blue's):

                       Blue: Cooperate    Blue: Defect
    Red: Cooperate     R, R               S, T
    Red: Defect        T, S               P, P

and to be a prisoner's dilemma game in the strong sense, the following condition must hold for the payoffs:

    T > R > P > S

The payoff relationship R > P implies that mutual cooperation is superior to mutual defection, while the payoff relationships T > R and P > S imply that defection is the dominant strategy for both agents.

2.1 Special case: Donation game

The donation game[8] is a form of prisoner's dilemma in which cooperation corresponds to offering the other player a benefit b at a personal cost c, with b > c. Defection means offering nothing. The payoff matrix is thus:

                   Cooperate      Defect
    Cooperate      b-c, b-c       -c, b
    Defect         b, -c          0, 0

Note that 2R > T + S (i.e. 2(b-c) > b-c), which qualifies the donation game to be an iterated game (see next section).

The donation game may be applied to markets. Suppose X grows oranges and Y grows apples. The marginal utility of an apple to the orange-grower X is b, which is higher than the marginal utility (c) of an orange, since X has a surplus of oranges and no apples. Similarly, for apple-grower Y, the marginal utility of an orange is b while the marginal utility of an apple is c. If X and Y contract to exchange an apple and an orange, and each fulfills their end of the deal, then each receives a payoff of b-c. If one defects and does not deliver as promised, the defector will receive a payoff of b, while the cooperator will lose c. If both defect, then neither one gains or loses anything.

3 The iterated prisoner's dilemma

If two players play the prisoner's dilemma more than once in succession, remember previous actions of their opponent, and change their strategy accordingly, the game is called the iterated prisoner's dilemma.

In addition to the general form above, the iterative version also requires that 2R > T + S, to prevent alternating cooperation and defection from giving a greater reward than mutual cooperation.

The iterated prisoner's dilemma game is fundamental to some theories of human cooperation and trust. On the assumption that the game can model transactions between two people requiring trust, cooperative behaviour in populations may be modeled by a multi-player, iterated version of the game. It has, consequently, fascinated many scholars over the years. In 1975, Grofman and Pool estimated the count of scholarly articles devoted to it at over 2,000. The iterated prisoner's dilemma has also been referred to as the "peace-war game".[9]

If the game is played exactly N times and both players know this, then it is always game-theoretically optimal to defect in all rounds. The only possible Nash equilibrium is to always defect. The proof is inductive: one might as well defect on the last turn, since the opponent will not have a chance to later retaliate. Therefore, both will defect on the last turn. Thus, the player might as well defect on the second-to-last turn, since the opponent will defect on the last no matter what is done, and so on. The same applies if the game length is unknown but has a known upper limit.
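Returning to the one-shot game, the dominance argument and the claim that mutual defection is the unique Nash equilibrium can be checked exhaustively. The payoff values below are illustrative ones satisfying T > R > P > S:

```python
# Verify that defection strictly dominates cooperation and that (D, D)
# is the unique Nash equilibrium for example payoffs with T > R > P > S.
from itertools import product

T, R, P, S = 5, 3, 1, 0  # assumed example values with T > R > P > S
payoff = {  # (row's move, column's move) -> (row's payoff, column's payoff)
    ("C", "C"): (R, R), ("C", "D"): (S, T),
    ("D", "C"): (T, S), ("D", "D"): (P, P),
}

def is_nash(a, b):
    """True if no player can gain by unilaterally deviating from (a, b)."""
    return (payoff[(a, b)][0] >= max(payoff[(x, b)][0] for x in "CD")
            and payoff[(a, b)][1] >= max(payoff[(a, y)][1] for y in "CD"))

# Defection dominates: against either opponent move, D pays the row player more.
assert all(payoff[("D", y)][0] > payoff[("C", y)][0] for y in "CD")

equilibria = [ab for ab in product("CD", repeat=2) if is_nash(*ab)]
print(equilibria)  # [('D', 'D')]
```

Any other quadruple satisfying T > R > P > S gives the same result, which is why the dilemma does not depend on the particular sentences in Tucker's story.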
Unlike the standard prisoner's dilemma, in the iterated prisoner's dilemma the defection strategy is counterintuitive and fails badly to predict the behavior of human players. Within standard economic theory, though, it is the only correct answer. The superrational strategy in the iterated prisoner's dilemma with fixed N is to cooperate against a superrational opponent, and in the limit of large N, experimental results on strategies agree with the superrational version, not the game-theoretically rational one.

For cooperation to emerge between game-theoretically rational players, the total number of rounds N must be random, or at least unknown to the players. In this case "always defect" may no longer be a strictly dominant strategy, only a Nash equilibrium. Amongst results shown by Robert Aumann in a 1959 paper, rational players repeatedly interacting for indefinitely long games can sustain the cooperative outcome.

3.1 Strategy for the iterated prisoner's dilemma

Interest in the iterated prisoner's dilemma (IPD) was kindled by Robert Axelrod in his book The Evolution of Cooperation (1984). In it he reports on a tournament he organized of the N-step prisoner's dilemma (with N fixed) in which participants have to choose their mutual strategy again and again, and have memory of their previous encounters. Axelrod invited academic colleagues all over the world to devise computer strategies to compete in an IPD tournament. The programs that were entered varied widely in algorithmic complexity, initial hostility, capacity for forgiveness, and so forth.

Axelrod discovered that when these encounters were repeated over a long period of time with many players, each with different strategies, greedy strategies tended to do very poorly in the long run while more altruistic strategies did better, as judged purely by self-interest. He used this to show a possible mechanism for the evolution of altruistic behaviour from mechanisms that are initially purely selfish, by natural selection.

The winning deterministic strategy was tit for tat, which Anatol Rapoport developed and entered into the tournament. It was the simplest of any program entered, containing only four lines of BASIC, and won the contest. The strategy is simply to cooperate on the first iteration of the game; after that, the player does what his or her opponent did on the previous move. Depending on the situation, a slightly better strategy can be "tit for tat with forgiveness": when the opponent defects, on the next move the player sometimes cooperates anyway, with a small probability (around 1-5%). This allows for occasional recovery from getting trapped in a cycle of defections. The exact probability depends on the line-up of opponents.

By analysing the top-scoring strategies, Axelrod stated several conditions necessary for a strategy to be successful:

Nice: The most important condition is that the strategy must be "nice", that is, it will not defect before its opponent does (this is sometimes referred to as an "optimistic" algorithm). Almost all of the top-scoring strategies were nice; therefore, a purely selfish strategy will not "cheat" on its opponent, for purely self-interested reasons first.

Retaliating: However, Axelrod contended, the successful strategy must not be a blind optimist. It must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as "nasty" strategies will ruthlessly exploit such players.

Forgiving: Successful strategies must also be forgiving. Though players will retaliate, they will once again fall back to cooperating if the opponent does not continue to defect. This stops long runs of revenge and counter-revenge, maximizing points.

Non-envious: The last quality is being non-envious, that is, not striving to score more than the opponent.

The optimal (points-maximizing) strategy for the one-time PD game is simply defection; as explained above, this is true whatever the composition of opponents may be. However, in the iterated-PD game the optimal strategy depends upon the strategies of likely opponents, and how they will react to defections and cooperations. For example, consider a population where everyone defects every time, except for a single individual following the tit for tat strategy. That individual is at a slight disadvantage because of the loss on the first turn. In such a population, the optimal strategy for that individual is to defect every time. In a population with a certain percentage of always-defectors and the rest being tit for tat players, the optimal strategy for an individual depends on the percentage, and on the length of the game.

In the strategy called Pavlov (win-stay, lose-switch), if the last round's outcome was P,P, a Pavlov player switches strategy the next turn, which means P,P is considered a failure to cooperate. For a certain range of parameters, Pavlov beats all other strategies by giving preferential treatment to co-players which resemble Pavlov.

Deriving the optimal strategy is generally done in two ways:

1. Bayesian Nash equilibrium: If the statistical distribution of opposing strategies can be determined (e.g. 50% tit for tat, 50% always cooperate), an optimal counter-strategy can be derived analytically.[10]

2. Monte Carlo simulations of populations have been made, where individuals with low scores die off, and those with high scores reproduce (a genetic algorithm for finding an optimal strategy). The mix of algorithms in the final population generally depends on the mix in the initial population. The introduction of mutation (random variation during reproduction) lessens the dependency on the initial population; empirical experiments with such systems tend to produce tit for tat players (see for instance Chess 1988), but no analytic proof exists that this will always occur.
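The tournament setup described above can be sketched as a small round-robin. The strategy line-up and round count here are illustrative, not Axelrod's actual entrants:

```python
# An Axelrod-style round-robin: each strategy sees both histories and returns
# "C" or "D"; scoring uses the conventional T, R, P, S = 5, 3, 1, 0.
SCORE = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
         ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    return "C" if not opp_hist else opp_hist[-1]   # copy opponent's last move

def always_defect(my_hist, opp_hist):
    return "D"

def always_cooperate(my_hist, opp_hist):
    return "C"

def grudger(my_hist, opp_hist):
    return "D" if "D" in opp_hist else "C"         # defect forever once betrayed

def play(s1, s2, rounds=200):
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        d1, d2 = SCORE[(m1, m2)]
        h1.append(m1); h2.append(m2)
        p1 += d1; p2 += d2
    return p1, p2

strategies = [tit_for_tat, always_defect, always_cooperate, grudger]
totals = {s.__name__: 0 for s in strategies}
for s1 in strategies:                               # round-robin, including self-play
    for s2 in strategies:
        totals[s1.__name__] += play(s1, s2)[0]
print(totals)  # the reciprocating strategies (tit_for_tat, grudger) top this table
```

Note that tit for tat never outscores its direct opponent in any single match; it wins on aggregate by eliciting cooperation, which is exactly the non-envious property listed above.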
P = {Pcc , Pcd , Pdc , Pdd } , where Pab is the probability
Although tit for tat is considered to be the most robust basic strategy, a team from Southampton University in England (led by Professor Nicholas Jennings and consisting of Rajdeep Dash, Sarvapali Ramchurn, Alex Rogers, and Perukrishnen Vytelingum) introduced a new strategy at the 20th-anniversary iterated prisoner's dilemma competition, which proved to be more successful than tit for tat. This strategy relied on collusion between programs to achieve the highest number of points for a single program. The university submitted 60 programs to the competition, which were designed to recognize each other through a series of five to ten moves at the start.[11] Once this recognition was made, one program would always cooperate and the other would always defect, assuring the maximum number of points for the defector. If the program realized that it was playing a non-Southampton player, it would continuously defect in an attempt to minimize the score of the competing program. As a result,[12] this strategy ended up taking the top three positions in the competition, as well as a number of positions towards the bottom.

This strategy takes advantage of the fact that multiple entries were allowed in this particular competition and that the performance of a team was measured by that of the highest-scoring player (meaning that the use of self-sacrificing players was a form of minmaxing). In a competition where one has control of only a single player, tit for tat is certainly a better strategy. Because of this new rule, this competition also has little theoretical significance when analysing single-agent strategies as compared to Axelrod's seminal tournament. However, it provided a basis for analysing how to achieve cooperative strategies in multi-agent frameworks, especially in the presence of noise. In fact, long before this new-rules tournament was played, Richard Dawkins in his book The Selfish Gene pointed out the possibility of such strategies winning if multiple entries were allowed, but he remarked that most probably Axelrod would not have allowed them if they had been submitted. The strategy also relies on circumventing the rule of the prisoner's dilemma that no communication is allowed between the two players, which the Southampton programs arguably did with their opening "ten move dance" to recognize one another; this only reinforces just how valuable communication can be in shifting the balance of the game.

3.2 Stochastic iterated prisoner's dilemma

In a stochastic iterated prisoner's dilemma game, strategies are specified in terms of "cooperation probabilities".[13] In an encounter between player X and player Y, X's strategy is specified by a set of probabilities P of cooperating with Y. P is a function of the outcomes of their previous encounters or some subset thereof. If P is a function of only their most recent n encounters, it is called a "memory-n" strategy. A memory-1 strategy is then specified by four cooperation probabilities: P = {Pcc, Pcd, Pdc, Pdd}, where Pab is the probability that X will cooperate in the present encounter given that the previous encounter was characterized by (ab). For example, if the previous encounter was one in which X cooperated and Y defected, then Pcd is the probability that X will cooperate in the present encounter. If each of the probabilities is either 1 or 0, the strategy is called deterministic. An example of a deterministic strategy is the tit for tat strategy written as P = {1,0,1,0}, in which X responds as Y did in the previous encounter. Another is the win-stay, lose-switch strategy written as P = {1,0,0,1}, in which X responds as in the previous encounter if it was a "win" (i.e. cc or dc) but changes strategy if it was a loss (i.e. cd or dd). It has been shown that for any memory-n strategy there is a corresponding memory-1 strategy which gives the same statistical results, so that only memory-1 strategies need be considered.[13]

If we define P as the above 4-element strategy vector of X, and Q = {Qcc, Qcd, Qdc, Qdd} as the 4-element strategy vector of Y, a transition matrix M may be defined for X whose ij-th entry is the probability that the outcome of a particular encounter between X and Y will be j given that the previous encounter was i, where i and j are one of the four outcome indices: cc, cd, dc, or dd. For example, from X's point of view, the probability that the outcome of the present encounter is cd given that the previous encounter was cd is equal to Mcd,cd = Pcd(1 − Qdc). (Note that the indices for Q are from Y's point of view: a cd outcome for X is a dc outcome for Y.) Under these definitions, the iterated prisoner's dilemma qualifies as a stochastic process and M is a stochastic matrix, allowing all of the theory of stochastic processes to be applied.[13]

One result of stochastic theory is that there exists a stationary vector v for the matrix M such that v·M = v. Without loss of generality, it may be specified that v is normalized so that the sum of its four components is unity. The ij-th entry in M^n will give the probability that the outcome of an encounter between X and Y will be j given that the encounter n steps previous was i. In the limit as n approaches infinity, M^n will converge to a matrix M^∞ with fixed values, giving the long-term probabilities of an encounter producing j, independent of i. In other words, the rows of M^∞ will be identical, giving the long-term equilibrium result probabilities of the iterated prisoner's dilemma without the need to explicitly evaluate a large number of interactions. It can be seen that v is a stationary vector for M^n and particularly M^∞, so that each row of M^∞ will be equal to v. Thus the stationary vector specifies the equilibrium outcome probabilities for X.
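The memory-1 machinery above can be made concrete in a few lines of code. The two strategy vectors below are arbitrary illustrative choices, not ones from the literature:

```python
# Build the 4x4 transition matrix M for memory-1 strategies P (player X) and
# Q (player Y); outcomes are ordered cc, cd, dc, dd from X's point of view.
def transition_matrix(P, Q):
    M = []
    for i in range(4):
        p = P[i]                 # X's cooperation probability after outcome i
        q = Q[[0, 2, 1, 3][i]]   # Y sees cd and dc swapped, as noted above
        M.append([p * q, p * (1 - q), (1 - p) * q, (1 - p) * (1 - q)])
    return M

def stationary(M, steps=20000):
    """Power iteration towards the stationary vector v satisfying v M = v."""
    v = [0.25] * 4
    for _ in range(steps):
        v = [sum(v[i] * M[i][j] for i in range(4)) for j in range(4)]
    return v

P = [0.9, 0.1, 0.9, 0.1]         # a "noisy" tit for tat for X (illustrative)
Q = [0.8, 0.2, 0.5, 0.4]         # an arbitrary stochastic opponent Y
M = transition_matrix(P, Q)
v = stationary(M)
print([round(x, 3) for x in v])  # long-run outcome probabilities; they sum to 1
```

Each row of M sums to 1 (it is a stochastic matrix), and repeated multiplication drives any initial distribution to the stationary vector v, mirroring the M^∞ argument above.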

Defining Sx = {R, S, T, P} and Sy = {R, T, S, P} as the short-term payoff vectors for the {cc, cd, dc, dd} outcomes (from X's point of view), the equilibrium payoffs for X and Y can now be specified as sx = v·Sx and sy = v·Sy, allowing the two strategies P and Q to be compared for their long-term payoffs.

3.2.1 Zero-determinant strategies

[Figure: The relationship between zero-determinant (ZD), cooperating and defecting strategies in the iterated prisoner's dilemma (IPD). Cooperating strategies always cooperate with other cooperating strategies, and defecting strategies always defect against other defecting strategies. Both contain subsets of strategies that are robust under strong selection, meaning no other memory-1 strategy is selected to invade such strategies when they are resident in a population. Only cooperating strategies contain a subset that are always robust, meaning that no other memory-1 strategy is selected to invade and replace such strategies, under both strong and weak selection. The intersection between ZD and "good" cooperating strategies is the set of generous ZD strategies. Extortion strategies are the intersection between ZD and non-robust defecting strategies. Tit-for-tat lies at the intersection of cooperating, defecting and ZD strategies.]

In 2012, William H. Press and Freeman Dyson published a new class of strategies for the stochastic iterated prisoner's dilemma called "zero-determinant" (ZD) strategies.[13] The long-term payoffs for encounters between X and Y can be expressed as the determinant of a matrix which is a function of the two strategies and the short-term payoff vectors: sx = D(P, Q, Sx) and sy = D(P, Q, Sy), which do not involve the stationary vector v. Since the determinant function D(P, Q, f) is linear in f, it follows that α·sx + β·sy + γ = D(P, Q, α·Sx + β·Sy + γ·U) (where U = {1,1,1,1}). Any strategies for which D(P, Q, α·Sx + β·Sy + γ·U) = 0 are by definition ZD strategies, and the long-term payoffs obey the relation α·sx + β·sy + γ = 0.

Tit-for-tat is a ZD strategy which is "fair" in the sense of not gaining advantage over the other player. However, the ZD space also contains strategies that, in the case of two players, can allow one player to unilaterally set the other player's score or, alternatively, force an evolutionary player to achieve a payoff some percentage lower than his own. The extorted player could defect, but would thereby hurt himself by getting a lower payoff. Thus, extortion solutions turn the iterated prisoner's dilemma into a sort of ultimatum game. Specifically, X is able to choose a strategy for which D(P, Q, β·Sy + γ·U) = 0, unilaterally setting sy to a specific value within a particular range of values, independent of Y's strategy, offering an opportunity for X to "extort" player Y (and vice versa). (It turns out that if X tries to set sx to a particular value, the range of possibilities is much smaller, only consisting of complete cooperation or complete defection.[13])

An extension of the IPD is an evolutionary stochastic IPD, in which the relative abundance of particular strategies is allowed to change, with more successful strategies relatively increasing. This process may be accomplished by having less successful players imitate the more successful strategies, or by eliminating less successful players from the game, while multiplying the more successful ones. It has been shown that unfair ZD strategies are not evolutionarily stable. The key intuition is that an evolutionarily stable strategy must not only be able to invade another population (which extortionary ZD strategies can do) but must also perform well against other players of the same type (which extortionary ZD players do poorly, because they reduce each other's surplus).[14]

Theory and simulations confirm that beyond a critical population size, ZD extortion loses out in evolutionary competition against more cooperative strategies, and as a result, the average payoff in the population increases when the population is bigger. In addition, there are some cases in which extortioners may even catalyze cooperation by helping to break out of a face-off between uniform defectors and win-stay, lose-switch agents.[8]

While extortionary ZD strategies are not stable in large populations, another ZD class called "generous" strategies is both stable and robust. In fact, when the population is not too small, these strategies can supplant any other ZD strategy and even perform well against a broad array of generic strategies for the iterated prisoner's dilemma, including win-stay, lose-switch. This was proven specifically for the donation game by Alexander Stewart and Joshua Plotkin in 2013.[15] Generous strategies will cooperate with other cooperative players, and in the face of defection, the generous player loses more utility than its rival. Generous strategies are the intersection of ZD strategies and so-called "good" strategies, which were defined by Akin (2013)[16] to be those for which the player responds to past mutual cooperation with future cooperation and splits expected payoffs equally if he receives at least the cooperative expected payoff. Among good strategies, the generous (ZD) subset performs well when the population is not too small. If the population is very small, defection strategies tend to dominate.[15]
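The "fairness" of tit-for-tat noted earlier in this section can be checked numerically with the stationary-vector formalism of section 3.2. The opponent strategy Q and payoff values below are illustrative, and the helper functions simply repeat the memory-1 construction:

```python
# Check that tit for tat's long-run payoff sx = v.Sx equals the opponent's
# sy = v.Sy, whatever (interior) strategy Y plays.
def transition_matrix(P, Q):
    # Outcomes ordered cc, cd, dc, dd from X's view; Y sees cd and dc swapped.
    M = []
    for i in range(4):
        p, q = P[i], Q[[0, 2, 1, 3][i]]
        M.append([p * q, p * (1 - q), (1 - p) * q, (1 - p) * (1 - q)])
    return M

def stationary(M, steps=20000):
    v = [0.25] * 4
    for _ in range(steps):  # power iteration towards v M = v
        v = [sum(v[i] * M[i][j] for i in range(4)) for j in range(4)]
    return v

Sx = [3, 0, 5, 1]                  # {R, S, T, P} with R, S, T, P = 3, 0, 5, 1
Sy = [3, 5, 0, 1]                  # {R, T, S, P}
tft = [1, 0, 1, 0]                 # tit for tat as a memory-1 strategy
Q = [0.7, 0.2, 0.6, 0.1]           # an arbitrary interior strategy for Y

v = stationary(transition_matrix(tft, Q))
sx = sum(a * b for a, b in zip(v, Sx))
sy = sum(a * b for a, b in zip(v, Sy))
print(round(sx, 6), round(sy, 6))  # equal: tit for tat never outscores Y
```

The equality holds because, against tit for tat, the cd and dc outcomes occur with equal long-run frequency, so the T and S terms cancel between sx and sy.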

3.3 Continuous iterated prisoner's dilemma

Most work on the iterated prisoner's dilemma has focused on the discrete case, in which players either cooperate or defect, because this model is relatively simple to analyze. However, some researchers have looked at models of the continuous iterated prisoner's dilemma, in which players are able to make a variable contribution to the other player. Le and Boyd[17] found that in such situations, cooperation is much harder to evolve than in the discrete iterated prisoner's dilemma. The basic intuition for this result is straightforward: in a continuous prisoner's dilemma, if a population starts off in a non-cooperative equilibrium, players who are only marginally more cooperative than non-cooperators get little benefit from assorting with one another. By contrast, in a discrete prisoner's dilemma, tit for tat cooperators get a big payoff boost from assorting with one another in a non-cooperative equilibrium, relative to non-cooperators. Since nature arguably offers more opportunities for variable cooperation rather than a strict dichotomy of cooperation or defection, the continuous prisoner's dilemma may help explain why real-life examples of tit for tat-like cooperation are extremely rare in nature (e.g. Hammerstein[18]) even though tit for tat seems robust in theoretical models.

3.4 Emergence of stable strategies

Players cannot seem to coordinate mutual cooperation, and thus often get locked into the inferior yet stable strategy of defection. In this way, iterated rounds facilitate the evolution of stable strategies.[19] Iterated rounds often produce novel strategies, which have implications for complex social interaction. One such strategy is win-stay, lose-shift. This strategy outperforms a simple tit-for-tat strategy; that is, if you can get away with cheating, repeat that behavior, but if you get caught, switch.[20]

The only problem with such tit-for-tat strategies is that they are vulnerable to signal error. The problem arises when one individual shows cooperative behavior but the other interprets it as cheating. As a result, the second individual now cheats, and this starts a see-saw pattern of cheating in a chain reaction.

4 Real-life examples

The prisoner setting may seem contrived, but there are in fact many examples in human interaction, as well as interactions in nature, that have the same payoff matrix. The prisoner's dilemma is therefore of interest to the social sciences such as economics, politics, and sociology, as well as to the biological sciences such as ethology and evolutionary biology. Many natural processes have been abstracted into models in which living beings are engaged in endless games of prisoner's dilemma. This wide applicability of the PD gives the game its substantial importance.

4.1 In environmental studies

In environmental studies, the PD is evident in crises such as global climate change. It is argued all countries will benefit from a stable climate, but any single country is often hesitant to curb CO2 emissions. The immediate benefit to any one country from maintaining current behavior is wrongly perceived to be greater than the purported eventual benefit to that country if all countries' behavior was changed, therefore explaining the impasse concerning climate change in 2007.[21]

An important difference between climate-change politics and the prisoner's dilemma is uncertainty; the extent and pace at which pollution can change climate is not known. The dilemma faced by governments is therefore different from the prisoner's dilemma in that the payoffs of cooperation are unknown. This difference suggests that states will cooperate much less than in a real iterated prisoner's dilemma, so that the probability of avoiding a possible climate catastrophe is much smaller than that suggested by a game-theoretical analysis of the situation using a real iterated prisoner's dilemma.[22]

Osang and Nandy provide a theoretical explanation with proofs for a regulation-driven win-win situation along the lines of Michael Porter's hypothesis, in which government regulation of competing firms is substantial.[23]

4.2 In animals

Cooperative behavior of many animals can be understood as an example of the prisoner's dilemma. Often animals engage in long-term partnerships, which can be more specifically modeled as an iterated prisoner's dilemma. For example, guppies inspect predators cooperatively in groups, and they are thought to punish non-cooperative inspectors.

Vampire bats are social animals that engage in reciprocal food exchange. Applying the payoffs from the prisoner's dilemma can help explain this behavior:[24]

- C/C: "Reward: I get blood on my unlucky nights, which saves me from starving. I have to give blood on my lucky nights, which doesn't cost me too much."

- D/C: "Temptation: You save my life on my poor night. But then I get the added benefit of not having to pay the slight cost of feeding you on my good night."

- C/D: "Sucker's Payoff: I pay the cost of saving your life on my good night. But on my bad night you don't feed me and I run a real risk of starving to death."

- D/D: "Punishment: I don't have to pay the slight costs of feeding you on my good nights. But I run a real risk of starving on my poor nights."

4.3 In psychology

In addiction research / behavioral economics, George Ainslie points out[25] that addiction can be cast as an intertemporal PD problem between the present and future selves of the addict. In this case, "defecting" means relapsing, and it is easy to see that not defecting both today and in the future is by far the best outcome. The case where one abstains today but relapses in the future is the worst outcome: in some sense the discipline and self-sacrifice involved in abstaining today have been "wasted" because the future relapse means that the addict is right back where he started and will have to start over (which is quite demoralizing, and makes starting over more difficult). Relapsing today and tomorrow is a slightly "better" outcome, because while the addict is still addicted, they haven't put the effort in to trying to stop. The final case, where one engages in the addictive behavior today while abstaining tomorrow, will be familiar to anyone who has struggled with an addiction. The problem here is that (as in other PDs) there is an obvious benefit to defecting "today", but tomorrow one will face the same PD, and the same obvious benefit will be present then, ultimately leading to an endless string of defections.

John Gottman, in his research described in The Science of Trust, defines good relationships as those where partners know not to enter the (D,D) cell, or at least not to get dynamically stuck there in a loop.

4.4 In economics

[…] makes it slightly different from a prisoner's dilemma. The outcome is similar, though, in that both firms would be better off were they to advertise less than in the equilibrium. Sometimes cooperative behaviors do emerge in business situations. For instance, cigarette manufacturers endorsed the making of laws banning cigarette advertising, understanding that this would reduce costs and increase profits across the industry.[26] This analysis is likely to be pertinent in many other business situations involving advertising.

Without enforceable agreements, members of a cartel are also involved in a (multi-player) prisoner's dilemma.[27] "Cooperating" typically means keeping prices at a pre-agreed minimum level. "Defecting" means selling under this minimum level, instantly taking business (and profits) from other cartel members. Anti-trust authorities want potential cartel members to mutually defect, ensuring the lowest possible prices for consumers.

4.5 In sport

Doping in sport has been cited as an example of a prisoner's dilemma.[28]

Two competing athletes have the option to use an illegal and/or dangerous drug to boost their performance. If neither athlete takes the drug, then neither gains an advantage. If only one does, then that athlete gains a significant advantage over their competitor, reduced by the legal and/or medical dangers of having taken the drug. If both athletes take the drug, however, the benefits cancel out and only the dangers remain, putting them both in a worse position than if neither had used doping.[28]

4.6 Multiplayer dilemmas

Many real-life dilemmas involve multiple players.[29] Al-


4.4 In economics
though metaphorical, Hardins tragedy of the commons
Advertising is sometimes cited as a real-example of the may be viewed as an example of a multi-player general-
prisoners dilemma. When cigarette advertising was le- ization of the PD: Each villager makes a choice for per-
gal in the United States, competing cigarette manufactur- sonal gain or restraint. The collective reward for unan-
ers had to decide how much money to spend on advertis- imous (or even frequent) defection is very low payos
ing. The eectiveness of Firm As advertising was par- (representing the destruction of the commons). A com-
tially determined by the advertising conducted by Firm mons dilemma most people can relate to is washing the
B. Likewise, the prot derived from advertising for Firm dishes in a shared house. By not washing dishes an indi-
B is aected by the advertising conducted by Firm A. vidual can gain by saving his time, but if that behavior is
If both Firm A and Firm B chose to advertise during a adopted by every resident the collective cost is no clean
given period, then the advertising cancels out, receipts plates for anyone.
remain constant, and expenses increase due to the cost The commons are not always exploited: William Pound-
of advertising. Both rms would benet from a reduc- stone, in a book about the prisoners dilemma (see Refer-
tion in advertising. However, should Firm B choose not ences below), describes a situation in New Zealand where
to advertise, Firm A could benet greatly by advertis- newspaper boxes are left unlocked. It is possible for peo-
ing. Nevertheless, the optimal amount of advertising by ple to take a paper without paying (defecting) but very
one rm depends on how much advertising the other un- few do, feeling that if they do not pay then neither will
dertakes. As the best strategy is dependent on what the others, destroying the system. Subsequent research by
other rm chooses there is no dominant strategy, which Elinor Ostrom, winner of the 2009 Sveriges Riksbank
8 5 RELATED GAMES

Prize in Economic Sciences in Memory of Alfred Nobel,


hypothesized that the tragedy of the commons is oversim-
plied, with the negative outcome inuenced by outside
inuences. Without complicating pressures, groups com-
municate and manage the commons among themselves
for their mutual benet, enforcing social norms to pre-
serve the resource and achieve the maximum good for
the group, an example of eecting the best case outcome
for PD.[30]
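The incentive structure of such a multi-player dilemma can be sketched in a few lines of Python. This is an illustrative sketch only: the payoff function and its benefit and cost numbers are invented for this example and do not come from any study cited here; they merely satisfy the two properties described above, namely that defecting is always individually tempting, yet unanimous defection leaves everyone worse off than unanimous cooperation.

```python
def payoff(my_move, others_cooperating, n=5, benefit=10, cost=3):
    """One player's payoff given how many of the other n-1 players cooperate."""
    cooperators = others_cooperating + (1 if my_move == "C" else 0)
    shared = benefit * cooperators / n   # everyone enjoys the commons equally
    return shared - (cost if my_move == "C" else 0)  # only cooperators pay

n = 5
# Whatever the others do, defecting beats cooperating for the individual...
for k in range(n):
    assert payoff("D", k, n) > payoff("C", k, n)

# ...yet everyone is better off under unanimous cooperation:
print(payoff("C", n - 1, n))  # all cooperate -> 7.0
print(payoff("D", 0, n))      # all defect   -> 0.0
```

With these numbers the game is dominance-solvable to mutual defection, exactly as in the two-player dilemma; choosing a cost larger than the benefit would instead break the dilemma, since cooperation would then not pay even collectively.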

4.7 In international politics

In international political theory, the Prisoner's Dilemma is often used to demonstrate the coherence of strategic realism, which holds that in international relations, all states (regardless of their internal policies or professed ideology) will act in their rational self-interest given international anarchy. A classic example is an arms race like the Cold War and similar conflicts.[31] During the Cold War the opposing alliances of NATO and the Warsaw Pact both had the choice to arm or disarm. From each side's point of view, disarming whilst their opponent continued to arm would have led to military inferiority and possible annihilation. Conversely, arming whilst their opponent disarmed would have led to superiority. If both sides chose to arm, neither could afford to attack the other, but at the high cost of developing and maintaining a nuclear arsenal. If both sides chose to disarm, war would be avoided and there would be no costs.

Although the 'best' overall outcome is for both sides to disarm, the rational course for both sides is to arm, and this is indeed what happened. Both sides poured enormous resources into military research and armament in a war of attrition for the next thirty years, until the Soviet Union could not withstand the economic cost. The same logic could be applied in any similar scenario, be it economic or technological competition between sovereign states.

5 Related games

5.1 Closed-bag exchange

[Figure: The prisoner's dilemma as a briefcase exchange]

Douglas Hofstadter[32] once suggested that people often find problems such as the PD problem easier to understand when it is illustrated in the form of a simple game, or trade-off. One of several examples he used was "closed bag exchange":

    Two people meet and exchange closed bags, with the understanding that one of them contains money, and the other contains a purchase. Either player can choose to honor the deal by putting into his or her bag what he or she agreed, or he or she can defect by handing over an empty bag.

In this game, defection is always the best course, implying that rational agents will never play. However, in this case both players cooperating and both players defecting actually give the same result, assuming no gains from trade exist, so chances of mutual cooperation, even in repeated games, are few.

5.2 Friend or Foe?

Friend or Foe? is a game show that aired from 2002 to 2005 on the Game Show Network in the USA. It is an example of the prisoner's dilemma game tested on real people, but in an artificial setting. On the game show, three pairs of people compete. When a pair is eliminated, they play a game similar to the prisoner's dilemma to determine how the winnings are split. If they both cooperate (Friend), they share the winnings 50–50. If one cooperates and the other defects (Foe), the defector gets all the winnings and the cooperator gets nothing. If both defect, both leave with nothing. Notice that the reward matrix is slightly different from the standard one given above, as the rewards for the "both defect" and the "cooperate while the opponent defects" cases are identical. This makes the "both defect" case a weak equilibrium, compared with being a strict equilibrium in the standard prisoner's dilemma. If a contestant knows that their opponent is going to vote Foe, then their own choice does not affect their own winnings. In a specific sense, Friend or Foe has a rewards model between prisoner's dilemma and the game of Chicken.

This payoff matrix has also been used on the British television programmes Trust Me, Shafted, The Bank Job and Golden Balls, and on the American shows Bachelor Pad and Take It All. Game data from the Golden
Balls series has been analyzed by a team of economists, who found that cooperation was "surprisingly high" for amounts of money that would seem consequential in the real world, but were comparatively low in the context of the game.[33]

5.3 Iterated snowdrift

Researchers from the University of Lausanne and the University of Edinburgh have suggested that the "Iterated Snowdrift Game" may more closely reflect real-world social situations. Although this model is actually a chicken game, it will be described here. In this model, the risk of being exploited through defection is lower, and individuals always gain from taking the cooperative choice. The snowdrift game imagines two drivers who are stuck on opposite sides of a snowdrift, each of whom is given the option of shoveling snow to clear a path, or remaining in their car. A player's highest payoff comes from leaving the opponent to clear all the snow by themselves, but the opponent is still nominally rewarded for their work.

This may better reflect real-world scenarios, the researchers giving the example of two scientists collaborating on a report, both of whom would benefit if the other worked harder. "But when your collaborator doesn't do any work, it's probably better for you to do all the work yourself. You'll still end up with a completed project."[34]

6 Software

Several software packages have been created to run prisoner's dilemma simulations and tournaments, some of which have available source code.

The source code for the second tournament run by Robert Axelrod (written by Axelrod and many contributors in Fortran) is available online

PRISON, a library written in Java, last updated in 1998

Axelrod-Python, written in Python (programming language)

7 In fiction

Hannu Rajaniemi set the opening scene of his The Quantum Thief trilogy in a "dilemma prison". The main theme of the series has been described as the "inadequacy of a binary universe", and the ultimate antagonist is a character called the All-Defector. Rajaniemi is particularly interesting as an artist treating this subject in that he is a Cambridge-trained mathematician and holds a PhD in mathematical physics; the interchangeability of matter and information is a major feature of the books, which take place in a post-singularity future. The first book in the series was published in 2010, with the two sequels, The Fractal Prince and The Causal Angel, published in 2012 and 2014, respectively.

A game modeled after the prisoner's dilemma is a central focus of the 2012 video game Zero Escape: Virtue's Last Reward and a minor part in its 2016 sequel Zero Escape: Zero Time Dilemma.

8 See also

Abilene paradox
Centipede game
Christmas truce
Evolutionarily stable strategy
Folk theorem (game theory)
Innocent prisoner's dilemma
Optional prisoner's dilemma
Nash equilibrium
Prisoner's dilemma and cooperation – an experimental study
Public goods game
Reciprocal altruism
Swift trust theory
Unscrupulous diner's dilemma
War of attrition (game)
Hobbesian trap
Liar Game

9 References

[1] Milovsky, Nicholas. "The Basics of Game Theory and Associated Games". Retrieved 11 February 2014.

[2] Fehr, Ernst; Fischbacher, Urs (Oct 23, 2003). "The Nature of human altruism" (PDF). Nature. Nature Publishing Group. 425 (6960): 785–791. Bibcode:2003Natur.425..785F. doi:10.1038/nature02043. PMID 14574401. Retrieved February 27, 2013.

[3] Tversky, Amos; Shafir, Eldar (2004). Preference, belief, and similarity: selected writings (PDF). Massachusetts Institute of Technology Press. ISBN 9780262700931. Retrieved February 27, 2013.
[4] Toh-Kyeong, Ahn; Ostrom, Elinor; Walker, James (Sep 5, 2002). "Incorporating Motivational Heterogeneity into Game-Theoretic Models of Collective Action" (PDF). Public Choice. 117 (3–4). Retrieved June 27, 2015.

[5] Oosterbeek, Hessel; Sloof, Randolph; Van de Kuilen, Gus (Dec 3, 2003). "Cultural Differences in Ultimatum Game Experiments: Evidence from a Meta-Analysis" (PDF). Experimental Economics. Springer Science and Business Media B.V. 7 (2): 171–188. doi:10.1023/B:EXEC.0000026978.14316.74. Retrieved February 27, 2013.

[6] Capraro, V (2013). "A Model of Human Cooperation in Social Dilemmas". PLoS ONE. 8 (8): e72427. doi:10.1371/journal.pone.0072427.

[7] Kaznatcheev, Artem (March 2, 2015). "Short history of iterated prisoner's dilemma tournaments". Theory, Evolution, and Games Group. Retrieved February 8, 2016.

[8] Hilbe, Christian; Martin A. Nowak; Karl Sigmund (April 2013). "Evolution of extortion in Iterated Prisoner's Dilemma games". PNAS. 110 (17): 6913–6918. doi:10.1073/pnas.1214834110. Retrieved 25 November 2013.

[9] Shy, Oz (1995). Industrial Organization: Theory and Applications. Massachusetts Institute of Technology Press. ISBN 0262193663. Retrieved February 27, 2013.

[10] For example see the 2003 study "Bayesian Nash equilibrium; a statistical test of the hypothesis" for discussion of the concept and whether it can apply in real economic or strategic situations (from Tel Aviv University).

[11] :: University of Southampton

[12] The 2004 Prisoners Dilemma Tournament Results show University of Southampton's strategies in the first three places, despite having fewer wins and many more losses than the GRIM strategy. (Note that in a PD tournament, the aim of the game is not to win matches; that can easily be achieved by frequent defection). It should also be pointed out that even without implicit collusion between software strategies (exploited by the Southampton team) tit for tat is not always the absolute winner of any given tournament; it would be more precise to say that its long run results over a series of tournaments outperform its rivals. (In any one event a given strategy can be slightly better adjusted to the competition than tit for tat, but tit for tat is more robust). The same applies for the tit for tat with forgiveness variant, and other optimal strategies: on any given day they might not 'win' against a specific mix of counter-strategies. An alternative way of putting it is using the Darwinian ESS simulation. In such a simulation, tit for tat will almost always come to dominate, though nasty strategies will drift in and out of the population, because a tit for tat population is penetrable by non-retaliating nice strategies, which in turn are easy prey for the nasty strategies. Richard Dawkins showed that here, no static mix of strategies forms a stable equilibrium and the system will always oscillate between bounds.

[13] Press, William H.; Freeman J. Dyson (2012). "Iterated Prisoner's Dilemma contains strategies that dominate any evolutionary opponent". PNAS Early Edition. Retrieved 26 November 2013.

[14] Adami, Christoph; Arend Hintze (2013). "Evolutionary instability of Zero Determinant strategies demonstrates that winning isn't everything": 3. arXiv:1208.2666.

[15] Stewart, Alexander J.; Joshua B. Plotkin (2013). "From extortion to generosity, evolution in the Iterated Prisoner's Dilemma". PNAS Early Edition. Retrieved 25 November 2013.

[16] Akin, Ethan (2013). "Stable Cooperative Solutions for the Iterated Prisoner's Dilemma": 9. arXiv:1211.0969.

[17] Le S, Boyd R (2007). "Evolutionary Dynamics of the Continuous Iterated Prisoner's Dilemma". Journal of Theoretical Biology. 245 (2): 258–267. doi:10.1016/j.jtbi.2006.09.016. PMID 17125798.

[18] Hammerstein, P. (2003). Why is reciprocity so rare in social animals? A protestant appeal. In: P. Hammerstein, Editor, Genetic and Cultural Evolution of Cooperation, MIT Press. pp. 83–94.

[19] Spaniel, William (2011). Game Theory 101: The Complete Textbook.

[20] Nowak, Martin; Karl Sigmund (1993). "A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner's Dilemma game". Nature. 364: 56–58. doi:10.1038/364056a0.

[21] "Markets & Data". The Economist. 2007-09-27.

[22] Rehmeyer, Julie (2012-10-29). "Game theory suggests current climate negotiations won't avert catastrophe". Science News. Society for Science & the Public.

[23] Osang and Nandy 2003

[24] Dawkins, Richard (1976). The Selfish Gene. Oxford University Press.

[25] Ainslie, George (2001). Breakdown of Will. ISBN 0-521-59694-7.

[26] This argument for the development of cooperation through trust is given in The Wisdom of Crowds, where it is argued that long-distance capitalism was able to form around a nucleus of Quakers, who always dealt honourably with their business partners. (Rather than defecting and reneging on promises, a phenomenon that had discouraged earlier long-term unenforceable overseas contracts). It is argued that dealings with reliable merchants allowed the meme for cooperation to spread to other traders, who spread it further until a high degree of cooperation became a profitable strategy in general commerce.

[27] Nicholson, Walter (2000). Intermediate Microeconomics (8th ed.). Harcourt.

[28] Schneier, Bruce (2012-10-26). "Lance Armstrong and the Prisoner's Dilemma of Doping in Professional Sports | Wired Opinion". Wired.com. Retrieved 2012-10-29.

[29] Gokhale CS, Traulsen A. "Evolutionary games in the multiverse". Proceedings of the National Academy of Sciences. 2010 Mar 23;107(12):5500–4.
[30] "The Volokh Conspiracy » Elinor Ostrom and the Tragedy of the Commons". Volokh.com. 2009-10-12. Retrieved 2011-12-17.

[31] Stephen J. Majeski (1984). "Arms races as iterated prisoner's dilemma games". Mathematical and Social Sciences. 7 (3): 253–266. doi:10.1016/0165-4896(84)90022-2.

[32] Hofstadter, Douglas R. (1985). Metamagical Themas: questing for the essence of mind and pattern. Bantam Dell Pub Group. ISBN 0-465-04566-9. See Ch. 29, "The Prisoner's Dilemma Computer Tournaments and the Evolution of Cooperation".

[33] Van den Assem, Martijn J. (January 2012). "Split or Steal? Cooperative Behavior When the Stakes Are Large". Management Science. 58 (1): 2–20. doi:10.1287/mnsc.1110.1413. SSRN 1592456.

[34] Kümmerli, Rolf. "'Snowdrift' game tops 'Prisoner's Dilemma' in explaining cooperation". Retrieved 11 April 2012.

10 Further reading

Amadae, S. (2016). "Prisoner's Dilemma", Prisoners of Reason. Cambridge University Press, NY, pp. 24–61.

Aumann, Robert (1959). "Acceptable points in general cooperative n-person games". In Luce, R. D.; Tucker, A. W. Contributions to the Theory of Games IV. Annals of Mathematics Study. 40. Princeton NJ: Princeton University Press. pp. 287–324. MR 0104521.

Axelrod, R. (1984). The Evolution of Cooperation. ISBN 0-465-02121-2

Bicchieri, Cristina (1993). Rationality and Coordination. Cambridge University Press.

Chess, David M. (December 1988). "Simulating the evolution of behavior: the iterated prisoner's dilemma problem". Complex Systems. 2 (6): 663–70.

Dresher, M. (1961). The Mathematics of Games of Strategy: Theory and Applications. Prentice-Hall, Englewood Cliffs, NJ.

Greif, A. (2006). Institutions and the Path to the Modern Economy: Lessons from Medieval Trade. Cambridge University Press, Cambridge, UK.

Rapoport, Anatol and Albert M. Chammah (1965). Prisoner's Dilemma. University of Michigan Press.

11 External links

Prisoner's Dilemma (Stanford Encyclopedia of Philosophy)

"The Bowerbird's Dilemma": the Prisoner's Dilemma in ornithology, a mathematical cartoon by Larry Gonick.

"The Prisoner's Dilemma": the Prisoner's Dilemma with Lego minifigures.

Dixit, Avinash; Nalebuff, Barry (2008). "Prisoner's Dilemma". In David R. Henderson (ed.). Concise Encyclopedia of Economics (2nd ed.). Indianapolis: Library of Economics and Liberty. ISBN 978-0865976658. OCLC 237794267.

Game Theory 101: Prisoner's Dilemma

Dawkins: Nice Guys Finish First

Play Prisoner's Dilemma on oTree

12 Text and image sources, contributors, and licenses


12.1 Text
Prisoner's dilemma Source: https://en.wikipedia.org/wiki/Prisoner%27s_dilemma?oldid=772006600 Contributors: AxelBoldt, Joao,
LC~enwiki, Mav, Bryan Derksen, The Anome, Verloren, Arvindn, Daniel Mahu, Kurt Jansson, Ryguasu, R Lowry, Tedernst, Olivier, Boud,
Michael Hardy, Dominus, Chinju, TakuyaMurata, Eric119, Snoyes, AllanR~enwiki, Poor Yorick, Evercat, Wfeidt, Adam Conover, Dying,
Timwi, Dcoetzee, Sikelianos, Sbloch, Dysprosia, Dandrake, Jwrosenzweig, Doradus, Maximus Rex, Mrand, Furrykef, Hyacinth, Temp-
shill, LMB, Dcsohl, Raul654, Johnleemk, Gakmo, Vt-aoe, Robbot, Pfortuny, Chealer, ChrisG, Fredrik, Chris 73, R3m0t, Romanm, Oji-
giri~enwiki, Yacht, DHN, Saforrest, JackofOz, Robinh, Lpetrazickis, Amead, Helfrich~enwiki, Cyrius, Wile E. Heresiarch, Rik G., David
Gerard, Psb777, Matthew Stannard, Giftlite, Nikodemos, Wolfkeeper, Tom harrison, Martijn faassen, Lupin, Brian Kendig, Gus Polly, Al-
ison, Henry Flower, Duncharris, BillyH, Jason Quinn, Christofurio, Foobar, Edcolins, Moon light shadow, Etaonish, Ruy Lopez, Socrtwo,
OwenBlacker, BookgirlST, Reagle, Neutrality, Grin147, Safety Cap, Discospinster, Solitude, Rich Farmbrough, Qwerty1234, Vapour,
FWBOarticle, Ponder, Gronky, SpookyMulder, Bender235, Andrejj, Jnestorius, DavidScotson, Alex3917, Wolfman, Ascorbic, Lambert-
man, Causa sui, Grick, Cretog8, Orbst, Guido del Confuso, Vintermann, Syzygy, Emhoo~enwiki, Treborbassett, Brainpo, Ral315, BlueN-
ovember, Eruantalon, DannyMuse, Atlant, Plumbago, Hodg, JohnAlbertRigali, Ciaran H, PAR, Mlm42, Hu, Hadija, JeeAge, Yvh11a,
Samohyl Jan, RJII, H2g2bob, SteinbDJ, Alai, Dan East, Erroneous01, LukeSurl, OwenX, Woohookitty, Mindmatrix, Logophile, Oliphaunt,
Kzollman, Ruud Koot, Tabletop, Smmurphy, KFan II, Junes, Cedrus-Libani, Royan, Graham87, Marskell, Deltabeignet, Magister Mathe-
maticae, Rjwilmsi, Chipuni, MarSch, Jmcc150, SpNeo, Denis Diderot, Haya shiloh, XLerate, Ligulem, NeonMerlin, Brighterorange, Ravik,
MapsMan, Wragge, FlaBot, SeptimusOrcinus, RexNL, Arctic.gnome, Pete.Hurd, Simishag, Themissinglint, Spencerk, Toi, Antimatter15,
Haonhien, King of Hearts, Chobot, Ramorum, Volunteer Marek, Jpkotta, Wavelength, RobotE, RussBot, Az7997, Loom91, Taejo, Zarob-
lue05, Wilfried Elmenreich, Chris Capoccia, Inkbacker, Gaius Cornelius, Bruguiea, Trovatore, Mr. Bene, Ondenc, Bossrat, JocK, Dog-
cow, Anetode, Emersoni, Mtu, Maunus, Eurosong, Blueyoshi321, Walkie, Tribaal, StuRat, PaulStephens, Cyrus Grisham, Chriswaterguy,
Chrishmt0423, LeonardoRob0t, Fourohfour, Ilmari Karonen, IxnayOnTheTimmay, ViperSnake151, GrinBot~enwiki, Babij, Ocam, That
Guy, From That Show!, Luk, KnightRider~enwiki, SmackBot, InverseHypercube, McGeddon, Lawrencekhoo, Adrian232, Jtneill, DTM,
Frymaster, Btwied, Mauls, Alsandro, Pathless, Gilliam, Richfe, Angelbo, Brokenxer, Lindosland, Tyciol, Westsider, Audacity, Shag-
gorama, Hibernian, Deli nk, Da Vynci, Colonies Chris, Royboycrashfan, Can't sleep, clown will eat me, OrphanBot, Alaiche~enwiki,
Xyzzyplugh, Lyran, Khoikhoi, DenisDiderot, Bigturtle, EPM, Valenciano, Michelle eris, Xyzzy n, Richard001, Sslevine, Michael Rogers,
Byelf2007, Bando26, Tktktk, Cmh, IronGargoyle, RomanSpa, Erin Billy, , SandyGeorgia, E-Kartoel, Dr.K., Josephus78, Stephen
B Streater, Iridescent, Kencf0618, Jason.grossman, Robfwoods, CapitalR, Trialsanderrors, Dclayh, Tawkerbot2, JRSpriggs, Cryptic C62,
JebJoya, FatalError, CRGreathouse, CmdrObot, Flambelle, Hiroshi-br, Aherunar, CBM, Wwengr, Sdorrance, Myasuda, Gregbard, Ryan-
Design, Reywas92, MC10, Uniqueuponhim, Mariojalves, RZ heretic, YechezkelZilber, Frostlion, Blackmetalbaz, Rracecarr, Jedonnelley,
Tloc, Jay32183, Kozuch, Bhause, PamD, Mattisse, Malleus Fatuorum, Thijs!bot, Jedibob5, Mojo Hand, Headbomb, Luigifan, Morgrim,
Rompe, Lamentation, Luna Santin, Widefox, Ryanhanson, AllanLee, Bona Fides, Ingolfson, Nwe, Smerdis, Mullibok, Hurmata, Ma-
gioladitis, VoABot II, Sodabottle, DEDemeza, Baccyak4H, Froid, Mikhailfranco, KConWiki, WhatamIdoing, Digifruitella, Webponce,
Mikolik, Americanhero, Wwmbes, Asaadi~enwiki, Lvwarren, Damuna, MartinBot, J.delanoy, Rgoodermote, Deadbath, Stazed, Cpiral,
LordAnubisBOT, Grosscha, Maduskis, Bouke~enwiki, Goarany, JayJasper, Space Pirate 3000AD, Quantling, Kenneth M Burke, Re-
drocket, TWCarlson, Cmastris, DASonnenfeld, Cerberus0, Jmrowland, Al.locke, Mkcmkc, GimmeBot, Pandacomics, Hawkinsbiz, Jhs-
Bot, LukeNukem, Calculuslover800, Radeks, Every name is taken12345, BurdetteLamar, Tluckie13, Masterofpsi, Givegains, Chmac,
AdRock, Osterczyk, Thehornet, Zsniew, Happysailor, Likebox, Flyer22 Reborn, Chromaticity, Le Pied-bot~enwiki, Artoasis, ThAtSo,
Dimitridobrasil, Glasbak, Randomblue, WikiLaurent, Vorapsak~enwiki, Jonathanstray, ClueBot, Badger Drink, Foxj, Drmies, Rosuav,
NickLinsky, Fenwayguy, Alexbot, Mprogers, Three-quarter-ten, Sun Creator, Bracton, 7&6=thirteen, Alexey Muranov, Je0106, Eebster
the Great, Jimjmc, Behavioralethics, BigK HeX, AlanM1, XLinkBot, Gerhardvalentin, Ost316, Freddy engels, Fiskbil, Addbot, Dunhere,
Annielogue, Diptanshu Das, Starpollen, LinkFA-Bot, W anthro, Ehrenkater, Tide rolls, Jarble, Legobot, Luckas-bot, Yobot, EchetusXe,
Estudiarme, Backfromquadrangle, Nallimbot, Trinitrix, Nlasbo, AnomieBOT, Erel Segal, Jim1138, Materialscientist, Hunnjazal, Cita-
tion bot, ArthurBot, Simultaneous movement, Xqbot, Tad Lincoln, J4lambert, J JMesserly, Geekguy02, Srich32977, Kithira, RibotBOT,
CompliantDrone, Tl, GliderMaven, FrescoBot, Noisyboy1234, Urgos, Gregvs3, Tavernsenses, Glummy, Citation bot 1, AstaBOTh15,
Carloszgz, LittleWink, Tom.Reding, Bmclaughlin9, RedBot, Iusewiki, Wanani, TRBP, Ras67, Trappist the monk, Lotje, Fedore, Jsevilla,
Brichard37, Whisky drinker, RjwilmsiBot, Tacomaster4, J36miles, EmausBot, Troubled asset, WikitanvirBot, Octopuppy, 848219pineap-
ple, GoingBatty, Snied, Seabreezes1, Philippe277, Akerans, , TurilCronburg, StrategySimulator, Unreal7, Pyrenil, Makecat,
12mmclean, Alessandromerlettidepalo, Barrrower, AeolianMachine, Palosirkka, Donner60, Orange Suede Sofa, Hazard-Bot, ClueBot NG,
Binkyuk, Joefromrandb, Frietjes, Cntras, BracusAnguis, Widr, Helpful Pixie Bot, Stekat, Calabe1992, Bibcode Bot, BG19bot, Endo-
brendo, Brian Tomasik, Tarshizzle, Dan653, Benjamin H-W, Huliosancez, CitationCleanerBot, Natalon, Fbell74, Gprobins, BattyBot,
Editengine, Returner323617, ChrisGualtieri, Krispycalbee, JYBot, SuperbowserX, Dexbot, Caspar42, Mogism, Ssbbplayer, Lugia2453,
Kugo2006, Billyd992, Wikecology, TheFrog001, William Di Luigi, I am One of Many, Ufoneda, Bicycleemoji, Qc1okay, Braveskid1,
Ashutt92, Allasse0927, Jomey, Arco74, Aronjacobson, Jackkenyon, Gruyern, Monkbot, AKS.9955, BethNaught, Hacscience, Gronk Oz,
Pariah24, Latiaslee, Godsy, Atvica, , KasparBot, BU Rob13, Thorkall, BiFur, Orderofthewhitelotus, Wreckless, Lief Amerigo,
KGirlTrucker81, BearGlyph, Bender the Bot, Wmd4444, Stanhalstead216, KAP03 and Anonymous: 583

12.2 Images
File:Deliberations_of_Congress.jpg Source: https://upload.wikimedia.org/wikipedia/commons/2/22/Deliberations_of_Congress.jpg
License: Public domain Contributors: New York Herald (Credit: The Granger Collection, NY) Original artist: W. A. Rogers
File:IPD_Venn.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1a/IPD_Venn.svg License: CC BY-SA 3.0 Contributors:
Own work Original artist: Jplotkin8
File:Lock-green.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/65/Lock-green.svg License: CC0 Contributors: en:File:
Free-to-read_lock_75.svg Original artist: User:Trappist the monk
File:Prisoner's_Dilemma_briefcase_exchange_(colorized).svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fc/Prisoner%27s_Dilemma_briefcase_exchange_%28colorized%29.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Christopher X Jon Jensen (CXJJensen) & Greg Riestenberg
File:Prisoner's_dilemma_payoff_matirx.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/bc/Prisoner%27s_dilemma_payoff_matirx.jpg License: CC BY-SA 4.0 Contributors: Own work Original artist: Jimjmc
File:Prisoners_Dilemma.ogg Source: https://upload.wikimedia.org/wikipedia/commons/1/19/Prisoners_Dilemma.ogg License: CC-
BY-SA-3.0 Contributors:
Derivative of Prisoners Dilemma Original artist: Speaker: JebJoya
Authors of the article
File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0
Contributors:
Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist:
Tkgd2007
File:Sound-icon.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/47/Sound-icon.svg License: LGPL Contributors:
Derivative work from Silsor's version Original artist: Crystal SVG icon set

12.3 Content license


Creative Commons Attribution-Share Alike 3.0
