
Twenty-Five On
Author(s): David Gauthier
Source: Ethics, Vol. 123, No. 4, Symposium: David Gauthier's Morals by Agreement (July 2013), pp. 601-624
Published by: The University of Chicago Press
Stable URL: http://www.jstor.org/stable/10.1086/670246
Accessed: 13/08/2013 08:47

This content downloaded from 143.107.252.82 on Tue, 13 Aug 2013 08:47:26 AM All use subject to JSTOR Terms and Conditions

Twenty-Five On*

David Gauthier


This article updates Morals by Agreement. It distinguishes two opposed conceptions of deliberative rationality: maximization and Pareto-optimization. It defends the latter. The constrained maximizers of Morals by Agreement are replaced by rational cooperators. They do not bargain but reach agreement on the principle of maximin proportionate gain, which is a relabeling of maximin relative benefit. The contractarian test, of the acceptability of social arrangements and norms, is introduced, and the Lockean proviso assumes an enhanced role as a cornerstone of rational cooperation. But questions about the force and rationale of the proviso remain.

Morals by Agreement has reached and now passed the age of twenty-five and seems to have found a niche among some of those who remain unpersuaded by either Kantianism or utilitarianism. It takes morality, or at least that part of it that concerns society and justice, to set out the rules that rational agents would agree to follow in their interactions one with another. This is the approach taken by Thomas Hobbes in his account of the laws of nature as "the true and only moral philosophy."1 Morals by Agreement attempts to present an up-to-date version of Hobbes's approach. In the years since it was published, I have continued to reflect on this enterprise and, aided by my critics, have realized the inadequacy of some of its parts. In this article I sketch my reflections. But I should acknowledge at

* This article is revised from my talk at the conference celebrating the twenty-fifth anniversary of the publication of Morals by Agreement, held in May 2011 at York University in Toronto, and organized by Susan Dimock, to whom I am more than grateful. She helped edit the paper and also provided comments on it, as did Christopher Morris. My thanks to him, and also to Claire Finkelstein, for her role in arranging three-way conversations among Chris, her, and myself. And my gratitude extends to all those who participated in the conference, making it philosophically and personally memorable.
1. David Gauthier, Morals by Agreement (Oxford: Oxford University Press, 1986); Thomas Hobbes, Leviathan, ed. Edwin Curley (Indianapolis: Hackett, 1994), 100.
Ethics 123 (July 2013): 601-624. © 2013 by The University of Chicago. All rights reserved. 0014-1704/2013/12304-0002$10.00


the outset that, while this may be my last word, it will not be the last word on the contractarian enterprise.

I. EVALUATION AND CHOICE

My point of departure is a widely held conception of rational agency: "Of two alternatives which give rise to outcomes, a rational agent will choose the one which yields the more preferred outcome, or, more precisely, in terms of the utility function he will attempt to maximize expected utility." If you are familiar with Luce and Raiffa's Games and Decisions, the classic text for those of us who have found that the theories of games and rational choice illuminate traditional philosophic concerns with rationality and morality, you might take this to be quoted from page 50.2 And with the exception of the words "rational agent," it is. They speak only of "a player" (which serves for an agent since they are presenting the theory of games); they do not speak of a rational player. And this makes an enormous difference. They are not characterizing rational agency but, rather, identifying choice and preference; preference is what is revealed in choice. Their account, called revealed preference theory, allows no conceptual space between preference and choice. If an agent chooses a rather than b, then the agent prefers a to b. There can be no counterpreferential choice. I shall not discuss this position, although it is favored by many economists, and I introduce it only to distinguish it from the view I shall discuss. Much of what I want to say would make no sense if revealed preference theory were correct. Some of my critics, such as Ken Binmore, simply state that "modern utility theory . . . is now based, in principle, on the choice behaviour of the decision-maker."3 And from that premise they have little difficulty in showing my ways to be in error.
But what they need to show is that I am in error in departing from their identification of choice and preference, and that would be a quite different criticism, one that to the best of my knowledge has not been made. And so I return to the view in the pseudo-quote, which implies that counterpreferential choice is not impossible but is always irrational. "Maximize" is not a description but an injunction. Should we accept it? I shall assume that there are two logically and conceptually distinct procedures: evaluating possible outcomes, or more generally objects and states of affairs, and choosing or deciding on an action. The view that I want to consider supposes that the former provide the grounds for the
2. R. Duncan Luce and Howard Raiffa, Games and Decisions (New York: Wiley, 1957), 50.
3. Ken Binmore, "Bargaining and Morality," in Rationality, Justice and the Social Contract, ed. David Gauthier and Robert Sugden (Hemel Hempstead: Harvester Wheatsheaf, 1993), 131-56; see 136.


latter, so that one chooses in the light of one's evaluations. And this light takes a maximizing form; the rational agent seeks in her choices to maximize the utility that represents her evaluations.

II. ONE-PERSON CHOICE SITUATIONS: MAXIMIZE!

Consider the formally simplest choice problem: one agent, with several possible actions each yielding a determinate outcome. If the agent is able to establish a complete, transitive preference ordering over the outcomes, then clearly he chooses rationally if and only if he selects an action with one of the most preferred outcomes. If we call an outcome maximal if it is dispreferred to no other, then the rational choice is that of an action whose outcome is maximal relative to the feasible set. There is one complication. It may be that the cost of evaluating all possible outcomes seems likely to exceed the benefit of determining which one of them is maximal. It may then be rational for an agent, in the light of his initial information about the outcomes, to set a threshold of acceptability and choose the first action to come to his attention that meets the threshold. This is a satisficing procedure. Understood, as it should be, simply as a way of adapting a maximizing ideal to the circumstances of real-world decision, it poses no theoretical challenge to that ideal. Suppose next that each possible action yields, not a determinate outcome, but a probability distribution over the members of the set of all possible outcomes. Given some mightily implausible assumptions about an agent's capacity to form consistent preferences for all possible gambles over all possible outcomes,4 we can define a real-valued function which assigns an expected utility to each probability distribution, equal to the sum of the utilities of the outcomes, each multiplied by its probability. Choosing an action correlated with the most preferred outcome is then equated with choosing an action with greatest expected utility. Expected utility is not utility.
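The definition just given is simple to state mechanically. A minimal sketch, in which the outcomes, utilities, and probabilities are all my own assumptions chosen purely for illustration:

```python
# Expected utility of an action: the sum of the utilities of its possible
# outcomes, each weighted by its probability. All numbers below are
# assumed for illustration only.
def expected_utility(lottery, utility):
    """lottery: {outcome: probability}; utility: {outcome: utility value}."""
    return sum(p * utility[o] for o, p in lottery.items())

utility = {"good": 10.0, "fair": 4.0, "bad": 0.0}
actions = {
    "safe":  {"fair": 1.0},                 # a sure thing
    "risky": {"good": 0.5, "bad": 0.5},     # an even gamble
}

# The maximizing prescription: choose the action with greatest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a], utility))
assert expected_utility(actions["risky"], utility) == 5.0
assert best == "risky"
```

With these assumed numbers the gamble's expected utility (5.0) exceeds the sure thing's (4.0), so the maximizing prescription selects the gamble, even though, as the text goes on to note, no experience of "expected utility" itself awaits the agent.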
It is a construct from preferences, intended to provide a basis for choice when no action can be correlated directly with the agent's most preferred outcome. Expected utility, and probability distributions over outcomes, are not the stuff from which experiences are made. Outcomes are experienced. Possible outcomes can usually be at least partially anticipated in imagination, but not probability distributions over outcomes, which have no phenomenal content. This difference may cast some doubt on expected utility as the ground for rational choice. And at a practical level, expected utility surely makes more demands of an agent than he can normally fulfill. But at a conceptual level, is there a plausible alternative? In practice, we cope with the uncertainty of outcomes through many ad hoc devices: we suppose that if
4. These assumptions are discussed sympathetically in Morals by Agreement, 38-46.


some outcome is very unlikely, then it won't happen, and we ignore it in our calculations. And we suppose the converse, that if some outcome does happen, then it's not very unlikely, so we may weigh it too heavily in our calculations. But this is at the level of practice. The best theory for one-person decision problems underwrites expected utility maximization.

III. TWO AND MORE PERSON CHOICE SITUATIONS: MAXIMIZE?

Armed with this success, the theorist of rational choice moves on from the province of individual decision to that of interaction, traditionally if not altogether perspicuously labeled the theory of games. And not surprisingly, he looks on interaction from a maximizing perspective. He seeks an account of rational choice in interaction parallel to that in decision, an account based on the thesis that rational choice maximizes expected utility. For my present purpose, which is to undermine the hold (the stranglehold) on interaction that the maximizing perspective induces on rational choice, we need consider only one argument, one that applies to all situations with finitely many agents and finitely many possible actions available to each agent. The argument, found in Games and Decisions, yields an a priori demand to be met by any theory of strictly competitive games, but as E. F. McClennen, who offers the clearest statement of the argument, points out, the argument can be applied to any game. It has therefore become central to a general theory of games.5 What is this master argument? Consider a game or interaction in which all relevant matters are and are known to be common knowledge: the rationality of each agent, the actions possible for each, the possible outcomes, the relation of the actions to the outcomes, and the value each agent places on each possible outcome. Everyone knows, and knows that everyone knows, the full strategic structure of the interaction. Now suppose that there is a theory of rational choice that selects as rational an action for each agent.
This theory is also common knowledge, so that each agent, knowing the full strategic structure, knows what the theory prescribes, not only for herself, but for everyone. Now it is surely a condition of any acceptable theory that, if an agent knows what it prescribes for everyone else, she not have reason to reject its prescription for herself. But if we adopt a maximizing perspective, this entails that in the ideal case of common knowledge of rationality and the strategic structure of the interaction, the theory prescribe that action that maximizes the agent's expected utility given the actions it prescribes for the other agents. If we call that action the agent's best reply, then the theory must prescribe each agent's
5. Edward F. McClennen, "Rethinking Rationality," in Reasons and Intentions, ed. Bruno Verbeek (Aldershot: Ashgate, 2008), 37-65, 42.


best reply to the actions it prescribes for the others. A mutual best reply is an equilibrium outcome. Among his other contributions to game theory, John F. Nash6 proved that if there are finitely many agents each with finitely many actions, then there must be at least one weak equilibrium outcome (weak in that the action prescribed for each must be a best reply, but need not be a unique best reply; there may be other actions equally good from the agent's point of view). And of course there may be multiple equilibria, and these equilibria need have nothing in common, either in the actions they prescribe or in the values of the outcomes. So the requirement that a theory of rational choice prescribe in the ideal case a best reply strategy to each agent can only be a necessary and not a sufficient condition on the theory. But it is difficult to see how, from a maximizing perspective, it could fail to be a necessary condition. A maximizing theory cannot prescribe one action to an agent if in the circumstances a different action would afford her greater expected utility.

IV. A SUPERIOR PERSPECTIVE

We have examined and accepted the maximizing requirement for one-person decision problems, and we have now seen how it would affect problems of interaction. But should we accept its effect? Should we suppose that rational agents are committed to equilibrium or best reply strategies in ideal cases? This is the first crucial point in my argument. I reject the maximizing perspective on interaction. I reject it because, as I shall now argue, there is a superior perspective, though it is not always available to a single agent. When I began examining rationality in decision and choice, the phrase "Prisoner's Dilemma" often met with a blank stare. It has now, rightly, become part of the accepted lexicon. The simplest form of the Prisoner's Dilemma is shown in this matrix:
                                   Column
                  C1                             C2
Row   R1    Third best for both            Row's best, Column's worst
      R2    Row's worst, Column's best     Second best for both

There are two agents, Row and Column; each has two possible actions or strategies. The matrix shows the payoff to the agents for each of the four possible outcomes.
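The structure of the matrix can be given illustrative numbers. In the following sketch the payoff values 3 > 2 > 1 > 0 are my own assumption (the matrix itself assigns none); they simply encode the orderings "best" through "worst," and the sketch checks mechanically that R1 and C1 are each agent's best reply to everything the other might do:

```python
# Illustrative Prisoner's Dilemma payoffs as (Row, Column) utilities.
# The numbers 3 > 2 > 1 > 0 are assumed for illustration only; they
# encode the orderings ("best" ... "worst") shown in the matrix.
payoffs = {
    ("R1", "C1"): (1, 1),  # third best for both
    ("R1", "C2"): (3, 0),  # Row's best, Column's worst
    ("R2", "C1"): (0, 3),  # Row's worst, Column's best
    ("R2", "C2"): (2, 2),  # second best for both
}

def best_reply_row(c):
    """Row's payoff-maximizing reply to a fixed Column strategy."""
    return max(["R1", "R2"], key=lambda r: payoffs[(r, c)][0])

def best_reply_col(r):
    """Column's payoff-maximizing reply to a fixed Row strategy."""
    return max(["C1", "C2"], key=lambda c: payoffs[(r, c)][1])

# R1 and C1 are best replies to *every* strategy of the other agent:
# each is strongly dominant.
assert all(best_reply_row(c) == "R1" for c in ["C1", "C2"])
assert all(best_reply_col(r) == "C1" for r in ["R1", "R2"])

# Yet the dominant-strategy outcome (R1, C1) gives both agents less
# than the dominated pair (R2, C2) would.
assert payoffs[("R2", "C2")][0] > payoffs[("R1", "C1")][0]
assert payoffs[("R2", "C2")][1] > payoffs[("R1", "C1")][1]
```

With these numbers, (R2, C2) leaves each agent better off than (R1, C1), yet best-reply reasoning steers both agents away from it.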
6. Luce and Raiffa, Games and Decisions, 106; John F. Nash, "Non-cooperative Games," Annals of Mathematics 54 (1951): 286-95.


There is surely nothing more to discover in the dilemma, except what is perhaps the most important thing of all. For what I find in the dilemma is not what has generally been ascribed to it. I find a clash between two distinct conceptions of rationality and the beginning of an argument against the directly maximizing perspective. In the dilemma, each agent has a strategy that maximizes his utility against the strategy of the other, whatever strategy the other may choose. It is the agent's best reply to every strategy of the other. In the matrix, Row's best reply is evidently R1, Column's is C1. In game theory, a strategy that is always best, whatever the others may be, is termed a strongly dominant strategy. And when a strongly dominant strategy is available, adherence to it is both a necessary and sufficient condition of rationality from a maximizing perspective. But in the dilemma, if each agent chooses his strongly dominant strategy, both will find themselves worse off than if they had both selected their alternative, dominated strategies. In the matrix, the dominant strategies of Row and Column yield each agent his or her third best outcome, whereas if both had chosen their dominated strategies they would have attained their second best outcome. How much worse off each will be depends, of course, on the difference in the payoffs of each outcome, in relation to the other payoffs the agent receives. We introduce the concept of optimality to express precisely what is involved here. An outcome is optimal for an agent just in case it affords him the greatest payoff compatible with the payoffs it affords the other agents.
If an outcome is optimal for all agents, so that it affords each the maximum payoff possible given the payoffs it affords the other agents, then we call it Pareto-optimal, after a word never used by the Italian economist Vilfredo Pareto.7 Another way of defining Pareto-optimality, which it will prove useful to have in mind, is to say that one outcome is Pareto-superior to another just in case it affords some agent a greater payoff, and no agent a lesser payoff, than he is afforded by the other; an outcome is then Pareto-optimal if and only if there is no outcome Pareto-superior to it. It is worth noting that Pareto-optimality has no application to one-person decision problems. It also has no application to the interaction problems that were the first concern of the founders of game theory: two-person, zero-sum games. Since the payoffs in these games can always be represented as p and -p, no outcome can be Pareto-superior to any other. Pareto-optimality thus appears on the scene only after maximization has established itself. It has been thought of as a conception that must accommodate itself to maximizing ideas, rather than, as I now am proposing, an alternative candidate for what constitutes rational action. Instead of supposing that an action is rational only if it maximizes the
7. Or so I read in a source now lost.


agent's payoff given the actions of the other agents, I am proposing that a set of actions, one for each agent, is fully rational only if it yields a Pareto-optimal outcome. I hasten to emphasize that once again we have only a necessary, and not a sufficient, condition for rationality. If we define the Pareto-frontier as the set of all the Pareto-optimal outcomes, then we seek a principle or procedure for selecting among outcomes on the frontier. And this will prove a more difficult matter than the mere insistence on Pareto-optimality. But before I turn to that issue, more needs to be said about the role I believe Pareto-optimality should play in rational choice theory. To the maximizer's charge that it cannot be rational for a person to take less than he can get, the Pareto-optimizer replies that it cannot be rational for each of a group of persons to take less than, acting together, each can get. The Prisoner's Dilemma exemplifies a fundamental feature of interaction: that in many situations, every equilibrium outcome leaves the agents with unrealized benefits that could be realized were they to coordinate their actions in an appropriate way, a way that can be determined by starting from a consideration of the payoff structure of the possible outcomes and arguing back to the actions necessary to realize a mutually beneficial Pareto-optimal payoff. The agents are, after all, directly concerned with what they realize from interaction, and only indirectly with what actions they perform, except in relation to what they realize. If it is beneficial for them to join together and cooperate, then this is what, insofar as they are rational, they will do. The directives issued by what I am calling a Pareto-optimizing theory of rational choice will differ in one significant respect from those issued from a maximizing perspective. A maximizing account will prescribe an action for an agent in any circumstances by invoking best reply considerations.
If the other parties to interaction are known to be irrational or uninformed in certain ways, then the directives issued from a maximizing perspective will accommodate this. What each person should do is to maximize his expected utility in his actual circumstances, not to do what would maximize his expected utility in circumstances that would be ideal but do not actually obtain. A Pareto-optimizing theory, however, provides only a single set of directives to all the interacting agents, with the directive to each premised on the acceptance by the others of the directives to them. If the others are not prepared to cooperate, then an individual may have no way of reaching on his own an outcome that is optimal or, if optimal, that offers an acceptable division of the payoffs. In a Prisoner's Dilemma, a Pareto-optimizing analysis prescribes the dominated strategy for each, but if one agent can be expected to follow maximizing reasoning, were the other to attempt to optimize, he would have to choose his cooperative strategy, which would leave him with his worst possible outcome. It would be Pareto-optimal, because, being the other agent's


best outcome, no alternative would be Pareto-superior to it. But it would clearly not provide an acceptable division of the payoffs. What a would-be cooperator should do when others are unwilling to cooperate in an acceptable way is a matter not directly addressed by a Pareto-optimizing account of rational choice. It may seem that in such circumstances, a rational agent can only fall back on individual maximization. But I shall argue presently that this is not quite so. A Pareto-optimizing account of rational choice ascribes to each person the capacity to coordinate her actions with those of her fellows, and to do so voluntarily, without coercion. It treats the exercise of this capacity as rational, when the person sees the outcome of coordination as reasonably efficient, so that no significant possible benefit is left unrealized, and reasonably fair, in that no one can reasonably complain that her concerns were not taken sufficiently into account in determining the outcome to be achieved by coordination. To be sure, "reasonable," "efficient," and "fair" are evaluative or normative terms that may seem question-begging. I shall have to defend my use of them. But at this point I want only to note that it is not the mere exercise of the capacity to coordinate that is in itself rational. Coordination need not result in efficient and fair cooperation. The point is rather that without the capacity to coordinate their actions with those of their fellows, agents would be unable to engage in rational cooperative interaction. The capacity to coordinate thus makes possible behavior that does not reduce to maximization but may be none the less rational. What I want to do is to explicate this behavior: cooperation on terms of reasonable efficiency and fairness.

V. RATIONAL COOPERATION

First I must clear up some confusion in the approach of my prior self.
In my previous study of rational choice, I introduced the phrase "constrained maximization" to characterize the choice procedures appropriate to rational cooperation.8 My underlying supposition was that rational agents would constrain their pursuit of their own greatest utility in order to bring about mutually advantageous Pareto-optimal outcomes, when straightforward maximization, calling for best reply strategies, would yield only nonoptimal returns. But I now think the label of constrained maximization was a mistake. Rational cooperators, as I now view them, do not interact on a maximizing basis. They cooperate, as I shall shortly argue, on an agreed basis, and there is no maximal bottom line to ground their cooperation. Faced with an interaction, they take their reasons for acting from considerations of fair Pareto-optimality, rather than maximization (of course, always provided they may expect their fellows to do likewise). Recall that
8. Gauthier, Morals by Agreement, 167-70.


on a Pareto-optimizing account a single directive is issued to all those interacting: to us, as it were, and to me only as one of us. Considerations that count as reasons for me as one of us would not count as reasons for me if I may not assume the us. When faced with the need to choose, cooperators are payoff oriented rather than strategy oriented. They ask which of the feasible joint payoffs it would be reasonable for them to accept (and we shall have to consider how this question may be answered). They do not ask which of his feasible strategies each should accept, given what he expects the others to do. A more informative if inelegant term for rational cooperators would be "agreed Pareto-optimizers." And my claim is that if cooperation on agreed terms is to be had, then a rational agent will optimize; only if cooperation is not to be had will he maximize. Note that rational cooperators need not seek a collective or substantively common good. Each is concerned to realize his own good, as expressed by his utility function. How his good relates to that of his fellows is of course important in determining the possible extent of cooperation, but only if the conditions for realizing one person's good are strictly incompatible with the conditions for realizing another's is cooperation ruled out. Thus rational cooperation yields mutual fulfillment; it need not afford common or collective benefit. But the cooperator manifests her concern with her good in a quite different way than does the straightforward maximizer who is wedded to best reply calculations about expected utility. In recognizing the other parties to interaction as individuals like herself, she is aware that the terms on which she considers it rational to act must be paralleled by the terms that others consider rational. Thus her own good enters into the determination of the appropriate Pareto-optimal state of affairs, but in the same way as does the good of each other person. We must now consider what that way is.

VI. MAXIMIN PROPORTIONATE GAIN

Rational cooperators must be agreed among themselves as to which of the usually many Pareto-optimal outcomes they will seek to realize. The Prisoner's Dilemma obscures this need, since the symmetry of the dilemma in its usual form makes the answer evident. But usually the answer is not evident. My first thought was that the agents should select the outcome by bargaining, and I took my task to be to endorse one of the existing proposals about rational bargaining or to find a better account. I examined the several views of bargaining discussed by Luce and Raiffa,9

9. Luce and Raiffa, Games and Decisions, 121-52.


found them all implausible, and produced my own.10 Had I solved the bargaining problem? No. I realized this listening to a talk by Ariel Rubinstein.11 The bargaining problem, as understood by game theorists, assumes that the bargainers are rational maximizers. Being maximizers, they must find that adherence to whatever agreement they make is their best reply to adherence by the other party or parties; how this is to be achieved is a problem to which I shall shortly return. But suppose it solved. Then we ask: what agreement would it be rational for these persons to make? An answer to this question would show how bargaining, with its cooperative outcome, may be embedded into a maximizing framework. Rubinstein's achievement was to show that the question could be answered and that the answer is a generalization of an axiomatic proposal by our old friend John Nash.12 Nash's most controversial axiom was an independence of irrelevant alternatives postulate that I found implausible. But Rubinstein, in taking the argument to a deeper level, showing, as it were, the maximizing foundation of cooperation, presented an obstacle that I could neither dismiss nor overcome. My first reaction was to acquiesce and accept the Nash solution as setting out the distributive conditions of rational cooperation.13 But that proved a blind alley. What I had failed to recognize was that the bargaining problem, as traditionally conceived, belongs within the scope of the maximizing perspective. Once I replaced that perspective with Pareto-optimality, my problem was to find the conditions for rational voluntary agreement among persons disposed to coordinate their actions with those of their fellows, provided they judged that their own concerns received adequate consideration. Enter the principle of maximin relative benefit or gain. The principle is quite simple.
Recall that utility functions, as I characterized them previously, are defined over a single agent's preferences; they do not provide a common measure or any basis for interpersonal comparisons. Both the unit and the zero point of each person's utility scale may be selected arbitrarily, and so the scales of different persons cannot be meaningfully compared. Thus a utilitarian, welfare-maximizing approach founders immediately; there is no common utility to be maximized. But suppose we determine, for each interacting agent, a payoff which affords him none of the benefits that may be realized through cooperative interaction; call this his cooperative minimum. Then let us determine a payoff which affords him all of the possible benefits that he could obtain from such interaction, assuming only that every other agent would receive at least his own cooperative minimum; call this the first agent's cooperative maximum. The agent's potential cooperative gain is then the difference between these payoffs. And for each possible outcome, his actual cooperative gain will be the difference between its payoff to him and his cooperative minimum. The cooperative minimum and maximum set the limits of rational cooperation. It would not be rational for an agent voluntarily to coordinate his actions with those of his fellows, unless he could expect to benefit thereby and thus gain more than his cooperative minimum. And it would not be rational for other agents to cooperate with him, unless they too could expect to benefit thereby and thus hold him to less than his cooperative maximum. Now comes the crucial step. For each person and each possible outcome, divide the agent's actual cooperative gain by her potential cooperative gain. This shows the actual gain from that outcome as a proportion of the agent's potential gain; call it the agent's proportionate gain. And it may easily be shown that this proportion is independent of the zero point and unit of the agent's utility function and, indeed, enables us to compare the proportionate gains of different agents. For all agents, the cooperative minimum represents a proportionate gain of 0; the cooperative maximum represents a proportionate gain of 1; and all other outcomes have a proportionate gain between 0 and 1. In Morals by Agreement I use the term "relative benefit" for what I am now calling proportionate gain. But the punch line is unchanged.

10. See David Gauthier, "Bargaining and Justice," in the collection Moral Dealing: Contract, Ethics, and Reason (Ithaca, NY: Cornell University Press, 1990), 187-206, for a discussion of these matters.
11. The talk, on noncooperative bargaining and the Nash solution, was presented to the International Conference on Game Theory held in June 1991 in Fiesole. Rubinstein's argument is briefly sketched in Binmore, "Bargaining and Morality," 149-51.
12. John F. Nash, "The Bargaining Problem," Econometrica 18 (1950): 155-62, and "Two-Person Cooperative Games," Econometrica 21 (1953): 128-40.
13. See David Gauthier, "Uniting Separate Persons," in Gauthier and Sugden, Rationality, Justice, and the Social Contract, 176-92, 177-79.
Each outcome may now be represented as a set of proportionate gains, one for each agent. For each outcome we select the minimum proportionate gain, the smallest it affords to any agent. We then compare the minimum proportionate gains afforded by the different outcomes and select that outcome affording the maximum minimum proportionate gain to any agent. This is the principle of maximin proportionate gain14 (in Morals by Agreement, maximin relative benefit15).
14. In Morals by Agreement maximin relative benefit is presented as the dual of minimax relative concession, which is introduced as a principle of rational bargaining. Here I treat maximin relative benefit as directly grounded in the idea of rational cooperation.
15. I can now state the formal difference between my account of rational cooperation and Nash's treatment of the bargaining problem. My account is sensitive to both the cooperative minimum and the cooperative maximum. Nash appeals to an equivalent of the former, but the cooperative maximum would be just another "irrelevant alternative" in his argument.
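The construction just described is mechanical enough to compute. The sketch below is mine, not the article's; the outcome names, payoffs, and cooperative minima and maxima are all invented for illustration.

```python
# Hypothetical illustration of maximin proportionate gain for two
# agents, A and B. All numbers and outcome names are invented.

# Payoffs (utilities) each candidate cooperative outcome yields.
outcomes = {
    "split-evenly": {"A": 6.0, "B": 5.0},
    "favor-A":      {"A": 9.0, "B": 3.5},
    "favor-B":      {"A": 4.0, "B": 6.5},
}

# Cooperative minimum: a payoff with none of the gains of cooperation.
coop_min = {"A": 2.0, "B": 3.0}
# Cooperative maximum: the most an agent could obtain, given that every
# other agent still receives at least her cooperative minimum.
coop_max = {"A": 10.0, "B": 7.0}

def proportionate_gain(payoffs, agent):
    """Actual cooperative gain as a fraction of potential cooperative gain."""
    return ((payoffs[agent] - coop_min[agent])
            / (coop_max[agent] - coop_min[agent]))

def maximin_outcome(outcomes):
    """Pick the outcome whose smallest proportionate gain is greatest."""
    return max(outcomes, key=lambda o: min(
        proportionate_gain(outcomes[o], agent) for agent in outcomes[o]))

# "split-evenly" gives A (6-2)/(10-2) = 0.5 and B (5-3)/(7-3) = 0.5, so
# its minimum proportionate gain (0.5) beats "favor-A" (min 0.125) and
# "favor-B" (min 0.25).
print(maximin_outcome(outcomes))   # prints "split-evenly"
```

Because proportionate gain is invariant under positive affine rescalings of either agent's utilities, rescaling the payoff table leaves the selected outcome unchanged.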


The principle of maximin proportionate gain ensures that each person may expect, ex ante, to gain from cooperative interaction. Because we can act only in the light of expected payoffs, some persons may fail to receive any actual gain, and others an excess, but this is unavoidable given the uncertainties in the relationship of actions and outcomes in choice situations. But the principle does more than ensure every person's expectation of gain. If we think of each person's expectation in terms of the proportion of potential cooperative gain she anticipates, then the least expectation is as great as possible. Were anyone's expectation of gain increased, then some other person would find her expectation reduced to a smaller proportion of potential cooperative gain than any person need anticipate. In maximizing the minimum proportionate gain, the principle affords the fullest possible consideration for the concerns of the person who gains proportionately least from cooperation.

The principle of maximin proportionate gain singles out a particular Pareto-optimal outcome and deems it the rational outcome for cooperators. But may there not be alternative principles, arising from alternative ways of inducing interpersonal comparisons of the gains from cooperation? Or may there not be principles arising from considerations that are not represented in utility functions? For example, may not the contribution each agent makes to the gains from cooperation, could we but measure it, provide a basis for determining how the gains should be shared? I leave these as questions. But I would insist that any proposed alternative to maximin proportionate gain must incorporate, as rational requirements, that the outcome it proposes be both relatively efficient, so that it approaches Pareto-optimality, and relatively fair, so that it represents an expected gain for each person comparable (in whatever way comparability can be induced) to the gain of every other person.
These requirements may fall short of providing a strict demonstration that the idea of proportionate gain offers the best way of assessing the rationality of cooperation. But they set out the conditions that any alternative method of assessment would have to meet.

Maximizers are of course not unmindful of the benefits cooperation can bring. I have mentioned Rubinstein's account of bargaining, in which cooperation is grounded in a noncooperative framework. But as I noted, orthodox game theorists must assume that agents are able to make binding agreements with each other, in order to consider what agreements would be rational to make. But agreements can be binding on maximizers only if each agent can expect keeping the agreement to afford him greater utility than breaking it. And since the agreed outcome may not be in itself an equilibrium, the would-be cooperators must have recourse to some enforcement device. But any such device, as Ned McClennen has argued, comes at a cost, and so leaves the agents in a position Pareto-inferior to what would result from voluntary compliance with


the agreement.16 Cooperation among persons who are motivated only by best-reply considerations will rest on covenants which "without the sword are but words,"17 and the sword must be paid for. From the perspective of genuine cooperators, the cooperation generated among maximizers must fall short of the ideal.18

But we should now ask, of what use is our principle? On the face of it, maximin proportionate gain would seem to have rather limited application in our practical reasoning. If a number of persons join in some determinate cooperative enterprise, where they can identify an initial position for each of them, and a set of possible outcomes for the enterprise, they could appeal to maximin proportionate gain in selecting the outcome on which they should coordinate. But most of everyday life is not made up of such determinate undertakings. Of greater significance is the potential role of the principle in assessing basic elements, institutions and practices, in the structure of society. The principle requires benefits for all, and while it does not seek to equalize these benefits in absolute terms (indeed, it does not provide any way to make interpersonal comparisons of the magnitude of benefits), it does ensure that no person expects a smaller proportion of his possible benefit than need be. And this principle would not judge most human societies favorably. Almost universally, an elite group of males organizes society so that they take the lion's share of the benefits it provides. Of course, they do not pretend that their society is a cooperative endeavor meeting standards of efficiency and fairness. But if it is not such an endeavor, then why should most persons consider themselves to have any reason voluntarily to adhere to its edicts, to follow its practices, or to accept its constraints?
This is not a rhetorical question; I shall have to defend the idea that a society that claims that its practices and edicts give its members reason to act must satisfy the principle of maximin proportionate gain. This will take us deeper into the principle's underpinning, from which we shall draw out the consequences for rational deliberation. But before proceeding with this, it may be useful to recapitulate my argument about cooperation. First of all, maximizing reasoning in interaction leads, in the ideal case, to best-reply choices and equilibrium outcomes. Second, in
16. McClennen, Rethinking Rationality, 43–44.
17. Hobbes, Leviathan, 106.
18. Orthodox maximizers have another approach to cooperation, arguing that it can emerge in repeated interactions. In each interaction considered alone maximizers would find cooperation irrational. But in a sequence of similar interactions, if some slight perturbation in the assumption of perfect information about the rationality of all agents enters, then it could prove rational for agents to cooperate as long as the sequence continued indefinitely into the future. I cannot assess this approach here, except to flag the admission that it depends on the agents lacking perfect information about one another's rationality.


many situations, no equilibrium outcome is Pareto-optimal, so that maximizers fail to realize all of the benefits that interaction can yield. Third, rational persons will seek to obtain these benefits, and to do so they must cooperate, each choosing in accordance with a single set of directives that, if followed by all, yields a Pareto-optimal outcome. Fourth, the standard for such directives is provided by the demands of reasonable efficiency and fairness, which I argue are best captured by the principle of maximin proportionate gain, which compares outcomes in terms of the proportion of potential cooperative gain obtained by each person, and makes the least proportion of potential gain as great as possible. But fifth, the principle of maximin proportionate gain has little direct relevance to our deliberations, an issue that we have yet to explore. And sixth, we do not treat persons engaged in cooperation as maximizers who constrain their choices to yield an optimal result, but as cooperators, who seek to bring about a Pareto-optimal result whose payoffs are acceptable to all. They view cooperation as directly rational, and go on to establish the rational mode of cooperation.

VII. RATIONAL DELIBERATION: REASONS AND GOOD REASONS

I have developed the argument of this article from the perspective of rational choice theory. However great my departures from orthodoxy may be, I have treated an agent's reasons for acting as based on her considered preferences, as manifested in her utility function. But I could have written this article from another perspective; call it that of rational deliberation. That is the perspective of what I had expected to be my final contribution to the study of practical reason, "Friends, Reasons and Morals."19 In returning to the rational choice perspective for this present article I have tried to relate my current position as closely as possible to that of Morals by Agreement so that the two can most readily be compared.
But I think that either perspective will lead to what I consider the final fruits of my philosophical endeavors: the contractarian test, and the Lockean proviso. If we examine real-life instances of first-person deliberation or second-person advice, we shall find that those considerations that persons take to be reasons often seem to have little relation to maximal or Pareto-optimal utility. Instead we find ample reference to considerations relating to rights, roles, rules, and relationships: considerations that on the face of it have nothing to connect them directly to either a maximizing or a Pareto-optimizing act. We also find normative expectations, such as the expectation that faculty members attend the departmental colloquium on a
19. David Gauthier, "Friends, Reasons, Morals," in Verbeek, Reasons and Intentions, 17–36.


regular basis, as ubiquitous sources of purported reasons for actions. But to be responsive to expectations is not, I think, plausibly modeled by locating the outcome of the expectation in an evaluative ordering. The limitations of rational choice theory begin to appear when we consider the normative context of our deliberations. That context is far richer and more varied than is provided by the agent's evaluation of outcomes. It embraces the full social setting in which agents find themselves. And "find" is the right word here; people do not choose the normative structures that embrace them. This is not the occasion to explore in depth how appeals to social roles and practices and to expectations are supposed to provide reasons for action.20 What concerns me here is one fundamental question: what, if anything, gives such considerations their rational authority? Persons can and do refer to what is required or expected of them both as explanation of and justification for what they do. But what weight do these considerations really carry? Not every expectation is warranted, not every social practice is desirable. A consideration that an agent takes as a reason, so that it weighs in his deliberations, and takes to be a reason, so that he supposes himself justified in giving it weight, may fail to be a justifying consideration. Can we say anything about good reasons, reasons that provide both explanation and justification? The theory of rational choice, interpreted as treating preferences as rational grounds of action and not simply as revealed in choice, offers to do just this. And my quarrel with it in simple maximizing form does not close the door on treating good reasons as those considerations that relate an agent's preferences to her choices. Indeed, I suppose that an agent's preferences are, in many circumstances, grounds for good reasons.
I disagree with orthodox choice theory in taking other considerations as also providing good reasons for action and in denying that the role played by preferential reasons requires defining utility as a measure of preference that the rational agent must seek to maximize. These are large differences, but I cannot discuss them further here. What I shall discuss is the role of standards of reasonable efficiency and fairness, and more precisely the principle of maximin proportionate gain, in validating (or invalidating) the taking of social expectations and roles as providing reasons for actions. First, a word on the evident connection between belief and reason. An agent may decide rationally, given his beliefs, but insofar as his beliefs are false, they do not have the justificatory force that he ascribes to them. But the agent may not behave irrationally in acting as he does. The Aztecs sacrificed thousands of prisoners of war to satisfy their hungry gods. Did they have good reason to do this? Neither a simple yes nor a simple no
20. Obvious title for a paper doing this: "Great Expectations."


offers a correct picture of their situation. They could explain their behavior in terms of their beliefs and justify it relative to those beliefs; they may have the same subjective landscape as we suppose for our own explanations and justifications. Of course we might insist that even given their beliefs, they had insufficient reason to sacrifice thousands of captives. But I think we would be mistaken to suppose this. The belief that the gods demanded human blood was not peculiar to them, but part of the shared system of beliefs common to the various native peoples of Mexico. And the demands of the gods are not easily put to one side. I am inclined to think that it is our understandable inability really to think ourselves into their situation that explains our reluctance to accept their behavior as rational given their beliefs. And of course it would be clearly mistaken to suppose that false beliefs excuse whatever is alleged to be based on them. The Nazis based their anti-Semitism on their largely false beliefs, but that doesn't make their genocidal conduct any less horrendous. That they took their beliefs to justify genocide is the horror. And with these inadequate remarks, I put aside the relation of belief to practical reasons.

VIII. A COOPERATIVE VENTURE

In many of my inquiries, I have quoted, favorably, John Rawls's claim that a society is "a cooperative venture for mutual advantage."21 I want to make one amendment; "advantage" is not the right word here, suggesting both a competitive or positional orientation22 only partially offset by "mutual," and a focus on goods rather than on good. So let us say that a society is a cooperative venture for mutual fulfillment. Rawls intends this claim to be descriptive of some actual societies, which it may be, but it is primarily normative, setting a standard, indeed the primary standard, that human societies should meet. Rawls's claim is not uncontroversial.
But its very great merit is that it neither postulates nor requires that society has any ends other than enabling its members to seek to fulfill their own values or ends, in conjunction with the ends of their fellows. No doubt humans, being social creatures, will share substantive ends with some of their fellows, but sharing particular concerns will arise voluntarily and not be required by their society. Marriages are not arranged; the partners choose each other. I do not propose here to defend the liberal view that values and ends, rights and objectives, are at base individual, and that cooperation is the core of human sociability. That would far exceed the scope of this present article. I shall simply suppose that we accept the normative idea
21. John Rawls, A Theory of Justice (Cambridge, MA: Harvard University Press, 1971), 4.
22. "Advantage, Ms. S. Williams," in the words of the tennis umpire.


of society as a cooperative venture for mutual fulfillment. The expectations, the rights and roles, that constitute the fabric of society, must properly be grounded in the conditions required for fulfillment. And together with this idea of society, I shall suppose that we think of a person as an autonomous cooperator who may demand of society that it defend the normative claims it makes upon him, that it show him he must rationally accept them.

IX. THE CONTRACTARIAN TEST

How may it do this? Persons are born into a society, with its rules and roles, its expectations, and it seems plausible to suppose that in human history all of this normative structure is not initially differentiated from the natural environment. The same explanations, the same rationales, would apply to both. The world order may have required human actions to sustain it, but it was not for humans to challenge this order. The idea that individuals might accept or reject the given norms had no place in early thought. The recognition that norms are not found in or fixed by nature, and that social norms do not reveal an invariant order, opens the door to the possibility of norms different from those in place, and thereby introduces the need for a justification of existing norms not previously envisaged: why these norms, these requirements and expectations, rather than others? There are of course a host of proposed answers to this question, including an appeal to the gods. But I find no gods, and the answer that I shall defend rests on the individual evaluations with which I began, and introduces only one fiction: the social contract. We shall need that fiction to distinguish the norms that should be accepted, whether or not they in fact are. The varied normative structures of society find their rationale in the overarching idea of a cooperative venture for mutual fulfillment. We should not suppose that there is a unique set of norms that fit together to constitute such a venture.
But I shall pass by the complications this creates, and focus on eligibility; a normative consideration is eligible just in case it could reasonably be accepted as part of some cooperative society. When would it be accepted? To take any moment within society would privilege the persons present in that moment, and the path by which they reached it. And of course, there is usually no act by which persons signal acceptance or rejection of the normative structure of their society, and no reason to believe that if an attempt were made to introduce such an act, it would be successful. Americans are always pledging allegiance to their flag and "the union for which," and so forth, but that is not to show that the values and norms of American society are appropriate for a cooperative venture for mutual fulfillment.


The expectations and requirements that form much of the fabric of our social life (and humans are not solitary creatures), and the institutions and practices that underpin them, are not to be decided by overt agreement. Another story is needed, and that is the story of the social contract, the hypothetical agreement that does ground the norms and values that we as rational cooperators must normatively make the object of our agreement and consequent acceptance. The key idea is that the best justification we can offer for any expectation or requirement is that it could be agreed to, or follow from what could be agreed to, by the persons subject to it, were they to be choosing, ex ante, together with their fellows, the terms of their subsequent cooperation. The hypothetical nature of the justification is clear: if, per impossibile, you were to be choosing, together with your fellow humans, the terms on which you would interact with them, then what terms would you accept? Those are the terms of rational acceptance, the terms that you, as a cooperator, have good reason to accept given that others have like reason. This is the weak contractarian test.23 Take a proposed normative requirement or expectation, any one. Ask whether it could be included as part of the normative structure of a society to which you could reasonably agree were you, together with your fellows, able by everyone's agreement to choose that structure. Now extend this to everyone: ask whether the requirement or expectation could be included as part of the structure to which everyone could reasonably agree were they able by universal agreement to choose that structure. If we can answer the questions affirmatively, then the proposed practical consideration passes the contractarian test and is eligible for inclusion in an actual society that constitutes a cooperative venture for mutual fulfillment.
A person in such a society who failed to fulfill the requirement or expectation would be rightly open to criticism and perhaps sanctions, although the case for sanctioning noncooperators is not one that I can try to make here. The weak contractarian test does not provide a full justification for any requirement or expectation that passes it. That a consideration could pass the test does not show that it must be part of every acceptable set of practices and expectations, but only that it is eligible for membership in some such set. Fully to justify the consideration in the real world, we should need to show that it passes the test and is part of our actual set of norms and values. The test, like best reply and Pareto-optimality in their different contexts, functions only as a necessary condition. Of course, some considerations will pass a stronger test. We could ask, not whether the requirement could be included as part of or derived from a mutually acceptable social structure, but whether it must be included in any acceptable social structure. No alternative to such a consideration could be reasonably agreed to by individuals choosing their social structure from an ex ante perspective. The contractarian test will not yield assessments of norms that are invariant over time. For what could be reasonably agreed to will to some extent depend on the possible alternatives, and these change with the advance of technology and understanding. New ways of living introduce new paths to mutual fulfillment and new forms of fulfillment, but they also close off older paths and in some cases older forms of fulfillment. The life of a Victorian railway stationmaster is not open to us today. The contractarian test links the roles, rights, responsibilities, and expectations that govern much of our social interaction with the terms of cooperation for mutual fulfillment. It licenses us to speak of the social contract, as setting out these agreed terms, to which we, as cooperators, are all bound. But the contract is, as I have said, a fiction. It is the agreement we would make, could we but choose together the terms of interaction among us. And this gives rise to an obvious objection to our argument. I have claimed that passing the contractarian test provides the best justification we can provide for social institutions and expectations. But a critic will reply that an actual contract or agreement provides some measure of support for taking ourselves to be bound by it. But a hypothetical contract only binds hypothetically, and this is no binding at all. If you had approached me yesterday, I would have sold you the painting stored in the attic for $50. But now that it has been identified as a sketch by Lawren Harris,24 I would want $50,000 for it. The contract that I would have made yesterday has no power to bind me today. I have no objection to the claim that only real contracts bind, as a general thesis about contractual justification.
23. I introduce the contractarian test in "Political Contractarianism," Journal of Political Philosophy 5 (1997): 132–48; see 132.
But the force of the social contract is not found simply in its being an agreement. Rather its force lies in its being the nearest approximation to an agreement in a context in which literal agreement is not possible but would be desirable. We cannot literally choose the terms of our interaction, but we can determine what terms we would rationally choose, from an ex ante standpoint that does not privilege the actual course that our interaction has taken. In this way we bring society as close as is possible to the status of a voluntary association. The objection that the test involves only hypothetical agreement has matters the wrong way round. Actual agreement would not show that the terms agreed to were rational, since it privileges existing circumstances. The contractarian test, in taking the ex ante perspective, removes that privilege. A quite different objection is that the appeal to agreement is redundant. If it is rational to act in a certain way, then of course it is rational
24. A member of Canada's most renowned school of painters, the Group of Seven.


to agree so to act. And if it is not rational to act in a certain way, then it is irrational to agree so to act. This objection simply misses the point of the contractarian enterprise. For the contractarian supposes that the rationality of acting in certain ways is established by showing that it would be rational to agree so to act, under suitably constrained circumstances. And showing it to be rational to agree is not to show that the act would be independently rational. Agreement does no work in the objector's view, whereas from the contractarian perspective it provides the key to rational cooperation. A third objection is that the appeal to ex ante agreement does not yield a sufficiently determinate result. What would be determinate? I have suggested that there will be more than one way of structuring a society so that it offers mutual fulfillment. But from a given technology and understanding of social phenomena, the differences may be relatively minor. In my paper "Political Contractarianism," I sketch some of the features of any society that I argue would in our circumstances not merely pass the contractarian test, but be required, in that their absence would fail the test.25 These features concern the availability of a range of productive activities open to each individual that would be rewarded sufficiently to enable her to choose and follow a fulfilling life plan. I put particular emphasis on the availability to each of the opportunity to develop her capacities and educate her affections. We are of course far from realizing a society that would pass the contractarian test, but we can survey the gap between the actual world and the ideal and recognize the steps that have been taken to close it, as well as some of the steps that need to be taken.
I note in "Political Contractarianism" that the contractarian test may be read as an interpretation of the principle of maximin relative benefit (i.e., proportionate gain) appropriate to political agreement on terms of interaction.26 Rational choice gives us an outcome in utility space that rational persons would seek to reach, applying the principle of maximin proportionate gain. Deliberative rationality gives us the way in which rational cooperators would interact, applying the contractarian test. I do not attempt to prove that the two perspectives harmonize but, rather, I assume that the same rationality must be manifest in each perspective, so that the results of applying them are necessarily harmonious. To defend the univocity of rationality would be a task for another occasion.

X. THE LOCKEAN PROVISO

The final topic I want to consider begins from the recognition that cooperation is not always possible, either because the situation affords no
25. Gauthier, "Political Contractarianism," 138–39.
26. Ibid., 139n.


benefit to cooperators, or because there is no will to cooperate on the part of some of the agents. How should the would-be cooperator respond? I noted earlier that what she should do when others are unwilling to cooperate in an acceptable way is a matter not directly addressed by a Pareto-optimizing account of rational choice. I want now to argue that it is indirectly addressed. In the absence of cooperation, one might suppose that agents would fall back on maximization. But Pareto-optimizers are not maximizers, and I have rejected the view that maximization underlies Pareto-optimization. They are alternative procedures for decision or choice, and what underlie both are the valuations, the utilities of the agents. How the would-be cooperator should act when cooperation is not to be had need not be determined by what noncooperators would consider rational. Instead, we may ask whether there are ways of acting antithetical to cooperation which may enter into noncooperative situations, as options to be avoided. What might these options be? Cooperators seek mutual fulfillment. Anticooperators, as one might call them, seek benefit to one person at the expense of another. We may now introduce the Lockean proviso. The proviso, in the generalized form that I shall consider, provides a constraint on noncooperative actions by would-be cooperators. Locke defended the appropriation of commons as private property, provided the appropriator left "enough, and as good" for others.27 If this condition, the Lockean proviso, is not met, then the appropriation would better the situation of the appropriator by worsening the situation of the others. Instead of the win/win orientation of cooperation, violators of the proviso adopt a win/lose stance. In interaction, they seek to win, at the expense of others who lose.
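The two-party form of the proviso's test can be put schematically. The sketch below is mine, not the article's; the payoff numbers are invented, and it checks only the two counterfactual comparisons, leaving aside the further requirement that the gain be achieved by the loss.

```python
# Hypothetical sketch of the two-party Lockean proviso as a test on
# counterfactual payoffs. All numbers are invented for illustration,
# and the causal "by" connection between gain and loss is not modeled.

def violates_proviso(a_with_b, a_without_b, b_with_a, b_without_a):
    """A violates the proviso against B when A is better off than A
    would be in B's absence, while B is worse off than B would be in
    A's absence."""
    betters_self = a_with_b > a_without_b
    worsens_other = b_with_a < b_without_a
    return betters_self and worsens_other

# Raiders (A) stealing sheep from pastoralists (B): the raiders gain
# relative to a world without pastoralists (3 -> 5), and the
# pastoralists lose relative to a world without raiders (6 -> 2).
print(violates_proviso(5, 3, 2, 6))   # prints True
```

Mutually beneficial interaction, by contrast, makes neither comparison come out as a worsening, so the predicate returns False.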
The would-be cooperator, although he may be frustrated in his attempt to achieve the win/win of agreed Pareto-optimization, will signal his readiness to cooperate by avoiding the win/lose stance. Thus he differs from the maximizer even in the absence of actual cooperation. But equally, he will do all he can to avoid being the victim in a lose/win outcome. He will, as Hobbes argues, "seek peace, and follow it," while "by all the means he can, . . . defend himself."28 In Morals by Agreement I brought the proviso into play in establishing the value of the cooperative minimum, the utility an agent could expect without any cooperative benefits. I argued that any utility gained from violations of the proviso must be excluded from this minimum. There are large issues here which I must bypass. And I have come to think of the proviso as playing a broader role, as providing a constraint, both rational and moral, on the justifiability of all interaction. The proviso is not the whole of morality or even the last word, but it is, I believe, the first word. It provides a default condition that may be appealed to in setting a baseline for social interaction. It can be overridden in many contexts when there is good reason to think that the override would be mutually beneficial and part of a practice that would pass the contractarian test. I cannot defend these large claims beyond the essential first step of noting the prima facie incompatibility of proviso violations and cooperation. I shall only clarify what the proviso allows and what it rejects.
27. John Locke, The Second Treatise of Government, in Two Treatises of Government, ed. Peter Laslett (Cambridge: Cambridge University Press, 1988), 291.
28. Hobbes, Leviathan, 80.

The Lockean proviso, in its generalized form, prohibits actions that better one person's situation by worsening that of another. There are three contexts to consider:
1. Two-party situations: The proviso prohibits an agent A from acting to better his situation from what it would be in the absence of the other person or group B, by worsening B's situation from what it would be in the absence of A. Suppose a group of raiders carry off the sheep raised by a community of pastoralists. Then the raiders violate the proviso. They are better off than were there no pastoralists to steal from, and the pastoralists are worse off than were there no raiders to steal from them.
2. Three-party situations, type 1 violations: The proviso prohibits an agent A from acting to better the situation of another agent or group B from what it would be in the absence of a third person or group C, by worsening the situation of C from what it would have been in the absence of B. Robin Hood (A) steals the harvest from the estates of the Sheriff of Nottingham (C), in order to distribute it among the poor inhabitants of Sherwood Forest (B). Had there been no poor, Robin would have left the harvest alone; had there been no Sheriff, Robin would have had nothing to steal.
3. Three-party situations, type 2 violations: The proviso prohibits an agent A from acting to better the situation of another agent or group B from what it would be in the absence of A, by worsening the situation of a third person or group C from what it would have been in the absence of A. As before, we have Robin (A), the forest poor (B), the Sheriff (C); absent Robin, the Sheriff would have kept his harvest and the poor their poverty.

We can add to our brief stories further details that will affect our sympathies: the Sheriff and his men were hardworking farmers and the forest poor a bunch of lazy layabouts; or the Sheriff had enclosed the Nottingham commons for his estates, and the forest poor were the dispossessed commoners. But eliciting sympathy is not my concern. Robin Hood is a proviso violator whatever our sympathies, although we may of


course think that in some circumstances the proviso should be overridden.

I have yet to comment on the importance of "by." The gain for one person is achieved by the loss for another. Omit the loss and there is no gain. The worsening is not incidental, not collateral damage. The proviso is not concerned with collateral damage. To be sure, we do not want to ignore such damage in deciding what we may do. But how we assess it is not my present concern. What should be my present concern is the insightful discussion of the Lockean proviso in Gijs van Donselaar's excellent book, The Right to Exploit.29 I would hope that the proviso can be sharpened as a weapon of reason against parasitism, but someone with a fresher mind is needed to pursue this.

XI. HERE COMES THE TROLLEY!?

I shall instead conclude by applying the proviso to the trolley problem.30 I shall assume familiarity with the problem and set out only the salient details. A trolley car is starting down a long incline when its brakes fail. The motorman throws the motors into reverse but to no avail; the car continues to gain speed. At the end of the long incline is a narrow cutting where, oblivious to the oncoming car, five people are walking on the track. If the trolley hits them, their death is all but certain. However, just before the cutting, a sidetrack diverges from the main. I am standing there and could throw the switch, sending the car off on the sidetrack. But standing on the sidetrack, back to me and the trolley, is a deaf man. If I divert the trolley, it will save the five, but kill him. May I throw the switch? Most people say "Yes."

Now a variant. Just before the cutting, there is an overpass, where I am standing. Beside me is a very obese individual, who is leaning over, somewhat off balance, trying to tie his shoelace. I realize that were a sufficiently heavy object to fall from the overpass in front of the trolley, the collision would in all likelihood derail it and bring it to a stop before it hit the five. The obese man is, I am quite sure, sufficiently heavy. If I push him he will fall onto the track, and while the collision will kill him, it will
29. Gijs van Donselaar, The Right to Exploit: Parasitism, Scarcity, Basic Income (Oxford: Oxford University Press, 2009). How could I not love a book that contains the sentence "Gauthier is right" (83), even if its scope is restricted to a rather central point in interpreting Locke?
30. Philippa Foot, "The Problem of Abortion and the Doctrine of the Double Effect," reprinted from the Oxford Review (1967) in Virtues and Vices (Oxford: Blackwell, 1976), 19–32; see esp. 23, 28; Judith Jarvis Thomson, "Killing, Letting Die, and the Trolley Problem," Monist 59 (1976): 204–17, and "The Trolley Problem," Yale Law Journal 94 (1985): 1395–1415.


save the five. May I push him? Most people say "No." But in each case, either one dies or five die. What is the difference that gives rise to these opposing responses?

In the first case, if I throw the switch, I better the situation of the five from what it would be in my absence, and worsen the situation of the one from what it would be in my absence, but I do not do the one by doing the other. The death of the one is collateral damage, not a necessary part of my rescue action but an unwelcome accompaniment. So there is no type 2 violation of the proviso. And since I do not better the situation of the five from what it would be in the absence of the one, there clearly is no type 1 violation.

Contrast this with the second case. If I push the obese man, I better the situation of the five from what it would be in my absence. And I do this by worsening his situation from what it would be in my absence. So there is a type 2 violation. And I better the situation of the five from what it would be in the obese man's absence, while worsening his from what it would be in their absence. So there is also a type 1 violation.

Invoking the proviso seems to me quite satisfying in the classic trolley problem. It can't be the whole story; in particular, as I noted, the proviso is silent on collateral damage. Suppose there were four persons on the sidetrack. . . . But again I must leave these matters to fresher minds.

AND IN CONCLUSION

I must also leave discussion of the full role of the proviso. For it brings a rejection of parasitic behavior (for example, that exhibited by the raiders in relation to the pastoralists) into our account of practical rationality. And I realize (I can't help but realize after Van Donselaar's critique)31 that there are flaws, to put it mildly, in the argument in Morals by Agreement that purports to link rationality to the proviso. We may have better success if we consider rational agents to be agreed Pareto-optimizers rather than constrained maximizers. But I have not examined this.
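The counterfactual comparisons that drive the trolley analysis can be stated compactly. The subscripted notation below is a convenience of my own, not one used in the argument itself: write u_X for the level of well-being person or group X actually enjoys, and u_X(-Y) for the level X would enjoy in the absence of Y.

```latex
% A sketch of the two three-party violation conditions, in notation
% introduced here for summary purposes only. The "by" clause is
% essential: the worsening must be the means to the bettering,
% not merely collateral to it.
\begin{align*}
\text{Type 1:}\quad & u_B > u_B(-C) \ \text{ and }\ u_C < u_C(-B),\\
& \text{where A's act achieves the bettering \emph{by} the worsening};\\[4pt]
\text{Type 2:}\quad & u_B > u_B(-A) \ \text{ and }\ u_C < u_C(-A),\\
& \text{again with the bettering achieved \emph{by} the worsening}.
\end{align*}
```

On this rendering, diverting the trolley satisfies neither condition (the one's death is collateral, and the five are not bettered relative to his absence), while pushing the obese man satisfies both.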
The prohibition on bettering by worsening seems to me to lie at the core of any adequate social morality. If I knew more precisely how to tie the proviso to rationality, I might finally have shown what has long been my goal: that social morality is part of rational choice, or at least integral to rational cooperation. I should like to think that the position I have advanced here brings me closer to that goal (though not yet reaching it) than does Morals by Agreement.

31. Van Donselaar, The Right to Exploit, chap. 2, esp. 42–51.

