
OLD ADAMS BURIED

Ian Rumfitt

Abstract. I present some counterexamples to Adams's Thesis and explain how they undermine arguments that indicative conditionals cannot be truth-evaluable propositions.

Grant that the old Adam in these persons may be so buried, that the new man may be raised up in them. (Baptism of those of Riper Years, Book of Common Prayer)

1. Adams's Thesis

According to Ernest Adams (1975), a thinker is more or less willing to accept the indicative conditional ⌜If A then B⌝ according as he is more or less confident that B, given A, or is more or less confident that B on the supposition that A is true. Adams's Thesis, as I shall call the doctrine of the previous sentence, is very widely accepted by theorists of conditionals. David Lewis describes it as 'well established' (1986b, 581); according to Frank Jackson, the Thesis is 'essentially right' (1979, n.17); Dorothy Edgington (1995, 5) presents a wide range of examples that confirm it. Most adherents of Adams's Thesis go further, and postulate that a thinker's degrees of confidence (or degrees of belief) in the truth of various propositions may be represented by numerical values between 0 and 1. Thus, for a given subject at a given time, they posit a degree of belief function b that maps propositions into the unit interval [0, 1] in such a way that b(X) = 1 when and only when the subject is certain of the truth of X, and b(X) = 0 when and only when he is certain of X's falsity. It is assumed (admittedly as an idealization) that a rational thinker's belief function conforms to the laws of a probability distribution. Given this representation, a thinker's conditional degree of confidence that B, given A, written b(B|A), may be identified with the ratio of his (unconditional) degree of confidence in the truth of the conjunction ⌜A and B⌝ to his (unconditional) degree of confidence in the truth of A. Thus, assuming that the thinker is not certain of A's falsity, so that b(A) ≠ 0, we have b(B|A) = b(A ∧ B)/b(A). Within this framework, Adams's Thesis may be reformulated as what Edgington has called the Equation (Edgington 1995, 271):

b(B if A) = b(B|A) = b(A ∧ B)/b(A), when b(A) ≠ 0.
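To fix ideas, here is a minimal numerical sketch (in Python) of the quantity on the right-hand side of the Equation. The joint degrees of belief are invented purely for illustration; only the ratio definition b(B|A) = b(A ∧ B)/b(A) is taken from the text.

    # Toy degrees of belief over the four conjunctions of A and B (illustrative numbers only).
    b = {("A", "B"): 0.30, ("A", "not-B"): 0.10,
         ("not-A", "B"): 0.20, ("not-A", "not-B"): 0.40}

    b_A = b[("A", "B")] + b[("A", "not-B")]      # b(A)
    b_A_and_B = b[("A", "B")]                    # b(A ∧ B)

    # The Equation identifies b(B if A) with this ratio, provided b(A) ≠ 0.
    if b_A != 0:
        print(b_A_and_B / b_A)                   # 0.75 on these numbers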

I shall follow Edgington in setting aside the case where b(A) = 0: I agree with her that the indicative conditional is used only if the antecedent is taken as an epistemic possibility, as not certainly false, by the speaker (thinker), 'at least for the sake of argument, at least temporarily, at least to co-operate with her audience' (op. cit., 264-5). So we need not worry what to say about ⌜If A then B⌝ when we are certain that A is false. Acceptance of Adams's Thesis, and the Equation, is apt to shape one's entire approach to the problem of giving a semantic theory for indicative conditionals. In particular, accepting the Equation provides the basis for a powerful argument, forcibly expounded by Edgington (1986, 1995), that indicative conditionals are not propositions. That is, they are not statements that can coherently be assessed as true or as false. For if they were propositions, and if the Equation were true, then they would have to be propositions that satisfied the Equation. That is:

We seek an X such that b(X) and b(B|A) = b(A ∧ B)/b(A) cannot coherently come apart: an interpretation such that for all consistent distributions of belief over the relevant domain, b(X) = b(A ∧ B)/b(A)… The bombshell is that no proposition at all satisfies the Equation. If we stick by [Adams's] Thesis, we must not think of conditionals as propositions, as truth bearers. If belief that if A, B fits the Thesis, it is nonsense even to say things of the form 'If A, B' is true if and only if, if A, B. Your degree of belief that B is true, on the supposition that A is true, cannot be consistently and systematically equated to your degree of belief that something is true, simpliciter (1995, 271).

The first proof of this bombshell was given by David Lewis (1976; see also Lewis 1986a and b) although, as we shall see, later writers have argued for the same conclusion from weaker premisses. Edgington concludes her discussion of some of these arguments as follows:

It is an empirical question how well [Adams's] Thesis fits our practice of assessing conditionals… But, to say the least, there could be people who use 'if' in this way. The result tells us that they do not use 'if' to express propositions, evaluable in terms of truth. A previously unnoticed test for the applicability of the concept of truth has presented itself: if judgements of a given type are subject to uncertainty, do uncertain judgements of this type fit the structure appropriate to uncertainty about truth bearers? (1995, 280)

I agree with Edgington that we have a potentially powerful form of argument to show that certain judgements, or the sentences that express them, are not truth bearers. In this paper, though, I want to explain why the argument fails in the present case. Insofar as Adams's Thesis and the Equation are understood to describe our actual use of indicative conditionals, there are counterexamples to both of them. Moreover, the counterexamples undermine the various versions of the bombshell argument. We should not accept Adams's Thesis or the Equation, and when we see why we should not accept them, we shall see that indicative conditionals may well be propositions.

2. A simple, mathematical counterexample to Adams's Thesis and the Equation

For a first example to show this, let us consider the conditional 'If Goldbach's Conjecture is true, then 0=1', which I shall abbreviate as 'If G, then F'. My degree of belief in G, the antecedent, is not zero: I am not certain that Goldbach's Conjecture is untrue. So the present conditional meets the requirement for applying the Equation: where b represents my current degrees of belief, b(G) ≠ 0. However, my degree of confidence in the truth of the consequent, F, is zero: I am certain that 0≠1. That is: b(F) = 0. Consequently, I am also sure of the falsity of the conjunction 'Goldbach's Conjecture is true and 0=1', so b(G ∧ F) = 0 and the crucial ratio b(G ∧ F)/b(G) is zero. This seems right inasmuch as we take that ratio to measure our degree of conditional confidence, or conditional belief: since I am certain that F is false, I am also certain that F is false, given G, or on the supposition that G is true. So my degree of confidence in the truth of F, given G, is zero. We should accept, then, that b(F given G) is zero. If Adams's Thesis or the Equation were correct, this would mean that no rational thinker has the slightest tendency to accept 'If G, then F': this conditional would be just as absurd and unacceptable as its consequent, F. But I think it is clear that we do not understand 'If Goldbach's Conjecture is true, then 0=1' as expressing an absurdity. To the contrary, we hear it as saying something tantamount to 'Goldbach's Conjecture is not true', which might well turn out to be the case. Certainly, if a mathematician derived 'If G, then F' from accepted axioms, he would not take himself to have derived an absurdity from those axioms. To the contrary: he would announce that he had refuted Goldbach's Conjecture. We have a case, then, where a thinker's degree of belief in F, given G, is zero, but where he rationally assigns a non-zero probability to 'If G, then F'. Indeed, someone who has derived a contradiction from Goldbach's Conjecture will accept this conditional. This is contrary to both Adams's Thesis and the Equation. It seems quite obvious to me that 'If Goldbach's Conjecture is true, then 0=1' is tantamount to 'Goldbach's Conjecture is not true'. I am clearly not alone in finding this obvious: some intuitionist mathematicians and logicians define ⌜¬A⌝ as ⌜A → 0=1⌝. To be sure, → in this definiens is the conditional connective of intuitionistic logic, not the English 'if…then'; but the formal connective is supposed to match the vernacular construction reasonably closely, and the definition would clearly fail if ⌜A → 0=1⌝ were invariably absurd.

Defenders of the Thesis may respond to this first example by claiming that conditionals bear a special sense in mathematical contexts. On this view, the example does not threaten Adams's Thesis (or the Equation) for ordinary, non-mathematical conditionals. It is a nice question what the postulated special, mathematical sense of the conditional could be. The case of the intuitionist shows that it is not always that of the material conditional. In intuitionistic logic, ⌜¬A ∨ B⌝ entails ⌜A → B⌝, which in turn entails ⌜¬(A ∧ ¬B)⌝, but the converse entailments both fail, so ⌜A → B⌝ is equivalent to neither form of the material conditional. For reasons I shall give later, it is very implausible to hold that the conditional bears a special sense in mathematical contexts. I need not argue this at present, though, for there are counterexamples to Adams's Thesis of the same form outside mathematics.
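The intuitionist's definition just mentioned can be made concrete in a proof assistant. The following sketch in Lean 4 is only an illustration, with Lean's False playing the role of '0=1': there, too, negation simply is a conditional with an absurd consequent, so asserting such a conditional amounts to denying its antecedent rather than to asserting an absurdity.

    -- In Lean, ¬A is by definition the conditional A → False, the formal analogue
    -- of the intuitionist's definition of ¬A as A → 0=1 discussed above.
    example (A : Prop) : ¬A ↔ (A → False) := Iff.rfl

    -- Hence a derivation of the absurd consequent from A licenses asserting the
    -- conditional, and that assertion just is the denial of A.
    example (A : Prop) (h : A → False) : ¬A := h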

3. Sly Pete revisited

To see this, it helps to revisit Allan Gibbard's famous story of Sly Pete (Gibbard 1981). This concerns a poker game played between Pete, a river-boat gambler, and an opponent, and involves two interested bystanders (Jack and Zack) who have to leave the gaming room before Pete decides whether to call. Pete is a monstrous and effective cheat: he has a way of finding out what cards his opponent is holding, and he always plays to win. Zack knows this, so he judges 'If Pete called, he won'. Jack, by contrast, does not know about Pete's cheating, but he has seen both players' cards and knows the rules of the relevant variety of poker. On that basis, he knows that Pete has the losing hand, so he judges 'If Pete called, he did not win'. Edgington gives an interesting analysis of Gibbard's example. On her view, it provides another reason (independent of the bombshell) to deny that conditionals are truth bearers:

This is [the example's] structure as I see it. (1) If two statements are compatible, so that they can both be true, a person may consistently believe both of them simultaneously. (2) For consistent A, and any B, people do not simultaneously believe both 'If A, B' and 'If A, ¬B' (unless by oversight), nor consider it permissible to do so; rather, to accept 'If A, B' is to reject 'If A, ¬B'. (This accords with [Adams's] Thesis: if b(B|A) is high, b(¬B|A) is low…) So, by (1), 'If A, B' and 'If A, ¬B' can't both be true: if they could, why shouldn't someone readily accept both? But (3) one person X can have impeccable reasons for believing 'If A, B', while another person Y has impeccable reasons for believing 'If A, ¬B'; (a) the situation is symmetric: there is no reason to prefer X's belief to Y's, or vice versa; no case can be made for saying just one of the beliefs is false; (b) neither of them is making any sort of mistake; each is rational and bases his judgement on known truths; no case can be made for saying both beliefs are false. So: they can't both be true, they can't both be false, and it can't be that just one of them is true. Truth and falsity are not suitable terms of assessment for conditionals (Edgington 1995, 293-4).

I agree with what Edgington says under (3a). As she observes (op. cit. 294, against Pendlebury 1989), even if Gibbard's example is not perfectly symmetrical, it is easy to construct one that is to the same purpose, so that ⌜If A, B⌝ and ⌜If A, ¬B⌝ cannot be given different truth-values. I also agree with what she says under (3b). Both Zack's and Jack's judgements are fully rational and based on known truths; accordingly, it is hard to sustain the claim that they are both false. Finally, I agree with her point (1). I want to argue, though, that in Gibbard's case ⌜If A, B⌝ and ⌜If A, ¬B⌝ are both true. That means rejecting Edgington's claim (2). What the Gibbard case shows is that it is rationally permissible simultaneously to accept ⌜If A, B⌝ and ⌜If A, ¬B⌝. As she says, this is inconsistent with Adams's Thesis. I say: so much the worse for the Thesis.

To see why this is the correct account of the case, let us imagine that Zack, who left the gaming room slightly earlier than Jack, asks Jack who won the game. Jack replies 'I don't know; I didn't stay that long. But I saw both players' cards and I can tell you this much: if Pete called, he didn't win.' Zack, we may further suppose, knows Jack to be a wholly trustworthy informant who knows the rules of poker. Finally, we suppose that Zack has read Daniel Kahneman's Thinking, Fast and Slow (2011), and so is aware that people are liable to make mistakes by jumping quickly to conclusions. Zack has resolved to counteract that tendency by refraining from making deductive inferences from his own knowledge, or from what others tell him, until he has laid out the appropriate argument in the style prescribed by his undergraduate logic textbook, which set forth a natural deduction system. So, in response to Jack's testimony, Zack does not immediately reach any judgement about what happened in the gaming room. Instead, he writes down the deduction below. In doing so, he uses ⌜A → B⌝ to symbolize the indicative conditional ⌜If A then B⌝, as I shall do myself from now on. He also uses A to symbolize 'Pete called' and B to symbolize 'Pete won'.

1. A → B            Premiss: from Zack's knowledge of Pete's proclivities
2. A → ¬B           Premiss: from Jack's testimony
3. A → (B ∧ ¬B)     1, 2, using: from A → B and A → C infer A → (B ∧ C)
4. A                Hypothesis
5. B ∧ ¬B           3, 4, modus ponens
6. ¬A               4, 5, reductio ad absurdum, with discharge of hypothesis A
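For readers who want the propositional core of Zack's deduction machine-checked, here is a sketch in Lean 4. Reading Zack's → as Lean's conditional is an assumption made only for the purposes of the check, since the paper deliberately leaves the semantics of the indicative conditional open.

    -- Line (3): from A → B and A → ¬B, infer A → (B ∧ ¬B).
    example (A B : Prop) (h1 : A → B) (h2 : A → ¬B) : A → (B ∧ ¬B) :=
      fun a => ⟨h1 a, h2 a⟩

    -- Lines (4)-(6): hypothesis A, modus ponens twice, and reductio give ¬A.
    example (A B : Prop) (h1 : A → B) (h2 : A → ¬B) : ¬A :=
      fun a => (h2 a) (h1 a)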

I think that Zack's reasoning is a counterexample to Edgington's claim (2). According to that claim, it is never rationally permissible simultaneously to believe ⌜If A, B⌝ and ⌜If A, ¬B⌝. Per contra, the beliefs Zack expresses by (1) and (2) seem wholly rational; indeed, they appear to have the status of knowledge. Moreover, Zack holds them simultaneously. Indeed, it is vital to the deduction that he does so: whilst (1) and (2) jointly entail (3), neither entails (3) by itself. Moreover, line (3) of Zack's deduction is a non-mathematical case of the same structure as that given in §2. In accepting (3), Zack rationally accepts a conditional whose antecedent expresses an epistemic possibility at the time of acceptance, but whose consequent he regards as certainly false. This is contrary to Adams's Thesis and the Equation.[1]

How was Edgington led to overlook the threat Gibbard's case poses to the Thesis? She writes: 'In fact, Pete didn't call. Once they learn this, neither [Jack nor Zack] has any use for a thought beginning "If Pete called"' (1995, 294). We may accept this too: as noted earlier, speakers use the indicative conditional only when the antecedent expresses an epistemic possibility. However, it would be a mistake to think that this point somehow de-legitimates Zack's simultaneous acceptance of premisses (1) and (2) and his consequent acceptance of (3). Zack is a Kahnemanian slow thinker, so he does not learn that Pete did not call until he reaches the conclusion of his deduction. Accordingly, Edgington's observation does not gainsay the contention that Zack simultaneously believes ⌜If A, B⌝ and ⌜If A, ¬B⌝ before reaching that conclusion. After all, one can surely rationally gain a new belief as a result of deductive argument. In fact, it is hard to see how Zack can be said to have learned that Pete did not call unless he continues to know (1) and (2) even after the deduction is concluded. To learn is to come to know, and it is hard to see how Zack can be said to know (6) unless he retains knowledge of the premisses from which it was inferred, viz., (1) and (2). I maintain, then, that a rational thinker can simultaneously believe both ⌜If A, B⌝ and ⌜If A, ¬B⌝. As Edgington says, this is contrary to Adams's Thesis and to the Equation: b(B|A) and b(¬B|A) cannot simultaneously be high. Indeed, we now have a non-mathematical example to confirm my earlier claim that a conditional with an absurd consequent can still be rationally assertible. Zack is perfectly entitled to assert (3), i.e. 'If Pete called, then he both won and did not win'. Contrary to Adams, though, Zack has no tendency to accept 'Pete both won and did not win', given that Pete called.

[1] Zack could have used modus ponens and reductio to reach (6) directly, without going via (3). However, (a) a deduction may still be correct even if it is not the shortest possible derivation of its conclusion from its premisses, and (b) the rule used in reaching (3), viz., from ⌜If A, B⌝ and ⌜If A, C⌝ infer ⌜If A then (B and C)⌝, is, I think, universally accepted by theorists of conditionals. (Certainly, it is a sound rule in the logic for conditionals that Edgington recommends.) I had Zack make the detour via (3) to bring out the connection between Gibbard's example and my earlier claim that there are assertible conditionals with absurd consequents and non-absurd antecedents.

4. Bradley's bombshell defused

Both the counterexamples to Adams's Thesis given so far involve a thinker's accepting a conditional of the falsity of whose consequent he is certain. It is rare for people to be certain of anything; some philosophers maintain that it is irrational to be certain even that 0≠1, or that Sly Pete did not both win and not win the hand. So one might wonder if the force of the bombshell could be preserved by making an exception of cases where a thinker is certain of the consequent's falsity. In assessing this suggestion we need to discriminate between different versions of the bombshell. To some versions, notably Richard Bradley's (2000), the treatment of conditionals with certainly false consequents is crucial. Bradley proposes a Preservation Condition that he regards as 'immediately and rationally compelling' in a way that Adams's Thesis is not (Bradley 2000, 219). Bradley's Condition says that if a subject's degree of belief in a statement A is greater than zero, but his degree of belief in B is zero, then his degree of belief in the conditional ⌜A → B⌝ must be zero. On the assumption that conditionals respect his Condition, Bradley proves that they cannot be treated as truth-evaluable propositions so long as there is a single conditional such that neither it nor its antecedent implies its consequent. (As Bradley says, the use of conditionals would be trivialized if this last condition were not met.) Bradley's Preservation Condition is a special case of Adams's Thesis; his version of the bombshell shows that this special case is a crux for the claim that conditionals have truth-conditions. Our examples, though, show that his Condition is far from being 'immediately and rationally compelling'. For those examples contradict Bradley's Preservation Condition, as well as Adams's Thesis itself. My degree of belief in 'Goldbach's Conjecture is true' is greater than zero; my degree of belief in '0=1' is zero; but my degree of belief in 'If Goldbach's Conjecture is true, then 0=1' is greater than zero. Similarly, at least at the start of his deduction, Zack's degree of belief in 'Pete called' is greater than zero; his degree of belief in 'Pete both won and did not win' is zero; but his degree of belief in 'If Pete called, he both won and did not win' is greater than zero.

Bradley calls his principle a preservation condition because of its origins in the theory of belief revision. That theory postulates that a thinker's outright beliefs may be represented by a deductively closed set, K, of statements in an appropriate language, and that, for any statement A of whose falsity the thinker is not certain, there is a well-defined deductively closed set KA*, representing what he conditionally accepts as true when he supposes that A is true. In a famous footnote to his paper 'General propositions and causality', Ramsey remarked that if two people are arguing 'If p, will q?' and are both in doubt as to p, they are adding p hypothetically to their stock of knowledge [or belief] and arguing on that basis about q (Ramsey 1929, 155). Within the framework of the theory of belief revision, Ramsey's Test is duly formulated as the thesis that the conditional ⌜A → B⌝ belongs to K if and only if B belongs to KA*. Our cases are also counterexamples to the 'only if' half of this thesis: if I derive a contradiction from Goldbach's Conjecture, then 'If G then F' will belong to my belief set K even though F does not belong to KG*. In a recent paper, Bradley (2007) has defended the present formulation of Ramsey's Test against its critics. His defence, though, rests on the following principle of conditional contraction: (CC) if K, B, and C (collectively) yield a contradiction, then so do K, ⌜A → B⌝, ⌜A → C⌝, for any statement A (Bradley 2007, 5). Bradley's (CC) rule, however, immediately entails that ⌜A → 0=1⌝ is contradictory no matter what A might be (take K to be empty and B = C = ⌜0=1⌝). So against the present counterexamples to Ramsey's Test, an appeal to (CC) simply begs the question.

5. Lewis's version of the bombshell

David Lewis's original version of the bombshell (Lewis 1976) differs from Bradley's in that it does not depend on how we treat conditionals with absurd consequents. The nerve of Lewis's argument is this. For any bona fide propositions X and C, a thinker's degree of belief in X will be the sum of his degree of belief in ⌜X ∧ C⌝ and his degree of belief in ⌜X ∧ ¬C⌝: b(X) = b(X ∧ C) + b(X ∧ ¬C). Since b(B|A) = b(A ∧ B)/b(A) when b(A) ≠ 0, it follows that b(X) = b(X|C)·b(C) + b(X|¬C)·b(¬C) so long as neither b(C) nor b(¬C) is zero. Let us assume, for reductio, that the indicative conditional ⌜A → B⌝ is a bona fide proposition. Then we have

b(A → B) = b(A → B|B)·b(B) + b(A → B|¬B)·b(¬B),

so long as neither b(B) nor b(¬B) is zero. In order to develop the argument further, we need to assign values to b(A → B|B) and b(A → B|¬B). What should those values be? The Equation tells us that b(A → B) = b(B|A), but what is b(A → B|C)? On the assumption that ⌜A → B⌝ is a bona fide proposition and that the Equation is true, there is only one legitimate approach to this question. According to the Equation, for any proposition D and for any proposition C such that b(C) ≠ 0, b(D|C) = b(C → D). Assuming that ⌜A → B⌝ is a proposition, then, b(A → B|C) = b(C → (A → B)). Now the logical law of importation says that ⌜C → (A → B)⌝ logically entails ⌜(C ∧ A) → B⌝, and that of exportation says conversely that ⌜(C ∧ A) → B⌝ entails ⌜C → (A → B)⌝. So if both of these laws are valid then ⌜(C ∧ A) → B⌝ and ⌜C → (A → B)⌝ are equivalent. If we also assume that this equivalence is so obvious as to constrain the attribution of degrees of belief to rational thinkers, then b(C → (A → B)) = b((C ∧ A) → B).[2] Then, by the Equation, b(A → B|C) = b(B|A ∧ C), so long as b(A ∧ C) ≠ 0 (in which case b(C) ≠ 0 a fortiori). In particular, then, b(A → B|B) = b(B|A ∧ B), so long as b(A ∧ B) ≠ 0, and b(A → B|¬B) = b(B|A ∧ ¬B), so long as b(A ∧ ¬B) ≠ 0. Let us assume, then, that neither ⌜A ∧ B⌝ nor ⌜A ∧ ¬B⌝ is certainly false, so that b(A ∧ B) ≠ 0 and b(A ∧ ¬B) ≠ 0. Then b(A → B|B) = b(B|A ∧ B) and b(A → B|¬B) = b(B|A ∧ ¬B). It is clear, though, that b(B|A ∧ B) = 1 and b(B|A ∧ ¬B) = 0: any rational thinker will be certain that B is true, given the truth of ⌜A ∧ B⌝, and he will be certain that B is false, given the truth of ⌜A ∧ ¬B⌝. So, whenever b(A ∧ B) ≠ 0 and b(A ∧ ¬B) ≠ 0, b(A → B) = 1·b(B) + 0·b(¬B) = b(B). As Lewis observes, this is an absurd result. It says that if neither ⌜A ∧ B⌝ nor ⌜A ∧ ¬B⌝ is certainly false, then the degree of belief in the conditional ⌜A → B⌝ must be the same as the degree of belief in its consequent, B. To avoid the absurdity, Edgington claims, we should reject the hypothesis that ⌜A → B⌝ is a bona fide proposition. (More particularly, we should reject the hypothesis that ⌜A → B⌝ can be conjoined with an arbitrary proposition.) Lewis's argument differs from Bradley's in that the consequent B is assumed not to be certainly false. So even if Adams's Thesis needs to be restricted to conditionals with non-absurd consequents, it may seem to matter little; for Lewis's version of the bombshell still explodes.

[2] In Lewis's own exposition, the validity of importation and exportation, and their status as regulating rational belief, are wrapped up into an assumption that the class of rational assignments of degrees of belief is closed under conditionalization (see Lewis 1976, 301). But it is worth making explicit the role that these logical laws play in his argument.
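The bombshell itself can be made vivid by brute force. The sketch below (Python; the four-world representation and the toy probabilities are invented for illustration) looks for a proposition X, construed as a set of A/B-worlds, whose probability coincides with b(B|A) under the initial belief function and under its conditionalizations on B and on ¬B, as Lewis's closure assumption demands; with b(B|A) ≠ b(B) it finds none.

    from itertools import product

    # Four worlds, each settling A and B; toy probabilities invented for illustration.
    worlds = [(a, bb) for a in (True, False) for bb in (True, False)]
    p = {(True, True): 0.2, (True, False): 0.2, (False, True): 0.1, (False, False): 0.5}

    def prob(event, dist):
        return sum(dist[w] for w in worlds if event(w))

    def conditionalize(dist, event):
        z = prob(event, dist)
        return {w: (dist[w] / z if event(w) else 0.0) for w in worlds}

    A = lambda w: w[0]
    B = lambda w: w[1]

    def satisfies_equation(X, dist):
        """Does prob(X) equal prob(A and B)/prob(A) under dist?"""
        pa = prob(A, dist)
        if pa == 0:
            return True  # the Equation is only required when the antecedent is possible
        return abs(prob(X, dist) - prob(lambda w: A(w) and B(w), dist) / pa) < 1e-9

    # Test every candidate proposition under p, p(.|B) and p(.|not-B).
    belief_functions = [p, conditionalize(p, B), conditionalize(p, lambda w: not B(w))]
    candidates = [set(sub) for sub in
                  (tuple(w for w, keep in zip(worlds, mask) if keep)
                   for mask in product((True, False), repeat=4))]

    survivors = [X for X in candidates
                 if all(satisfies_equation(lambda w, X=X: w in X, d) for d in belief_functions)]
    print(survivors)   # [] -- no proposition satisfies the Equation across these belief functions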

6. A more complicated mathematical counterexample to Adams's Thesis

I agree with Lewis and Edgington that the loss of independence between a conditional and its consequent whenever neither ⌜A ∧ B⌝ nor ⌜A ∧ ¬B⌝ is certainly false is absurd. But I do not think that the source of the absurdity is the hypothesis that ⌜A → B⌝ is a bona fide proposition. I think the culprit is the claim that b(A → B|¬B) = 0, so long as b(A ∧ ¬B) ≠ 0. Again, a mathematical conditional provides a counterexample to this claim. Let A = 'Peano Arithmetic proves its own consistency' and let B = 'Peano Arithmetic is not consistent', so that ⌜A → B⌝ expresses Gödel's Second Incompleteness Theorem. Let b represent my degrees of belief. I maintain that my beliefs are entirely rational if b has the following properties. First, b(A ∧ ¬B) ≠ 0. To be sure, B may be derived from A, so there is (we may assume) no real possibility of A's being true while B is false. Even so, the derivation is somewhat intricate so that, even after studying Gödel's argument, I may rationally hold that, for all I know for certain, the proof might contain a subtle fallacy.[3] The point is even clearer if my belief in the truth of the Second Theorem derives from another's testimony: however reliable my source is, I may rationally think there is some chance that he has got the result wrong. In either case, then, b(A ∧ ¬B) may have a small positive value. We may further assume that b(A ∧ B) also has a small positive value. ⌜A ∧ B⌝ is equivalent to the claim that Peano Arithmetic is inconsistent, and whilst I think that this is most unlikely to be true, I am not certain that it is false: after all, some future genius might derive a contradiction from the Peano axioms in the way that Russell derived a contradiction from the axioms of naïve set theory. The conditions for the application of Lewis's argument are met, then: b(A ∧ B) ≠ 0 and b(A ∧ ¬B) ≠ 0. So Lewis is committed to claiming that b(A → B|¬B) = 0. What does this claim say, though? It says that I shall assign a zero probability to Gödel's Second Theorem, on the supposition that Peano Arithmetic is consistent. Since I do in fact believe (to a degree just short of unity) that Peano Arithmetic is consistent, it requires no imaginative effort to discover what chance of truth I assign to Gödel's Second Theorem on such a supposition. Whilst not quite unity, its value is, of course, considerably higher than zero. So a claim crucial to Lewis's argument is false.

[3] This lack of certainty might be reinforced by reading the exposition of Gödel's Second Theorem in George Boolos and Richard Jeffrey's classic textbook, Computability and Logic. They prove the Second Theorem as a corollary to Löb's Theorem, and remark that their proof of Löb's Theorem 'resembles a certain proof of the existence of Santa Claus' (Boolos and Jeffrey 1980, 186).

The counterexample does not show whether the flaw in Lewis's argument lies in Adams's Thesis itself or in the assumed obvious validity of the import-export equivalence. The relevant instance of import-export asserts that

(1) If Peano Arithmetic is consistent and Peano Arithmetic proves its own consistency, then Peano Arithmetic is inconsistent

is equivalent to

(2) If Peano Arithmetic is consistent, then if Peano Arithmetic proves its own consistency, Peano Arithmetic is inconsistent.

I think that (1) and (2) are obviously equivalent, so the import-export laws are not the problem here. Rather, (1) is a counterexample to the Equation. My degree of belief in 'Peano Arithmetic is inconsistent', given that Peano Arithmetic is consistent and Peano Arithmetic proves its own consistency, is (rationally) very low. However, the conditional (1) is highly assertible. It may be objected that the conditional (1) falls outside the scope of the Equation because it has an absurd antecedent, but the antecedent is not absurd in the relevant sense. The proof of the Second Theorem shows that the truth of 'Peano Arithmetic is consistent and Peano Arithmetic proves its own consistency' is mathematically (and metaphysically) impossible, but it does not follow that this conjunction cannot stand as the antecedent of an indicative conditional. For all anyone who has not seen Gödel's proof knows, the conjunction might be true; indeed, eminently rational people, most notably David Hilbert, expected it to be true. But at any rate, the counterexample shows that there is something wrong either with the Equation itself or with the assumed obviousness of the import-export laws. Since Lewis's argument depends on both of those premisses, his version of the bombshell is defused.

7. Conditionals inside and outside mathematics

The argument that conditionals are not propositions fails, then. It rests on theses, Adams's Thesis and the Equation, to which there are exceptions. Moreover, the exceptions are highly pertinent to the bombshell arguments. For all those arguments show, conditionals might well be propositions. If conditionals were propositions, we could apply the usual account of validity to arguments involving them. On that account, an argument is valid only if its conclusion is true whenever its premisses are true. A theorist who denies that conditionals are propositions must give some other, non-alethic account of validity. Adams and Edgington propose such an account. I conclude by arguing that their account of validity can give no coherent description of a topic that is germane to the counterexamples to Adams's Thesis, viz., the relationship between mathematical and non-mathematical uses of the conditional construction.

Let us call the uncertainty, u, of a proposition (for a given thinker at a given time), the difference between the subject's degree of belief in the proposition and unity: u(X) = 1 − b(X). Let us call an argument probabilistically valid if, for every well-behaved assignment of degrees of belief,[4] the uncertainty of its conclusion is no greater than the sum of the uncertainties of its premisses. When an argument is probabilistically valid, then, and has a small number of premisses, a rational thinker who assigns a high probability to each of those premisses will also assign a high probability to its conclusion. Such an argument preserves high probability in the sense in which a classically valid argument preserves truth.

[4] I.e. for every assignment of degrees of belief that respects the axioms of probability theory.
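As a quick numerical illustration of this definition (the degrees of belief below are invented for the purpose), modus ponens keeps the conclusion's uncertainty within the budget set by its premisses, whereas the inference from ⌜¬A⌝ to ⌜A → B⌝, which will matter later, can take a nearly certain premiss to a nearly worthless conclusion.

    # Uncertainty as defined in the text: u(X) = 1 - b(X).
    def u(b):
        return 1 - b

    # Modus ponens: from A -> B and A, infer B.  With b(A -> B) = b(B|A) = 0.9 and
    # b(A) = 0.8 (toy numbers), the lowest coherent value of b(B) is b(A and B) = 0.72,
    # and even then the conclusion's uncertainty stays within the premisses' budget.
    b_if, b_A = 0.9, 0.8
    b_B_worst = b_if * b_A
    assert u(b_B_worst) <= u(b_if) + u(b_A)            # 0.28 <= 0.30

    # The CP-derived inference from not-A to A -> B: a nearly certain premiss can sit
    # with a nearly certain-to-be-wrong conclusion, so the form is probabilistically invalid.
    b_not_A = 0.9                                      # u(not-A) = 0.1
    b_A2, b_A_and_B = 0.1, 0.001                       # coherent with b(not-A) = 0.9
    b_if2 = b_A_and_B / b_A2                           # b(A -> B) = b(B|A) = 0.01
    print(u(b_if2), ">", u(b_not_A))                   # 0.99 > 0.1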


Which arguments are probabilistically valid? Let us first restrict attention to a propositional language, L, whose connectives are negation, conjunction and disjunction. Adams proved (1975, 57) that an argument in L is classically valid just in case it is probabilistically valid. So, if we follow the classical rules for negation, conjunction and disjunction, the arguments that we construct will preserve high probability, as well as preserving truth. Adams and Edgington do not take conditionals to be truth-bearers. But the Equation affords a way of assigning degrees of belief, and hence uncertainties, to simple conditionals, that is, to conditionals whose antecedents and consequents are ordinary propositions. Thus, given the Equation, u(A → B) = 1 − b(A → B) = 1 − b(B|A). In this way, the notion of probabilistic validity extends to arguments in an expanded language, L+, whose formulae include simple conditionals whose antecedents and consequents are well-formed formulae of L. As before, such an argument is probabilistically valid if and only if the uncertainty of its conclusion is no greater than the sum of the uncertainties of its premisses. Adams discovered which forms of argument in L+ are probabilistically valid. He showed, inter alia, that any argument by modus ponens in L+ is probabilistically valid, but that the classical rule of conditional proof, CP (given a deduction of B from X ∪ {A}, construct a deduction of ⌜A → B⌝ from X), has some probabilistically invalid instances in L+, as do certain classically valid rules of inference derivable from CP, notably contraposition (from ⌜A → B⌝ infer ⌜¬B → ¬A⌝)[5] and hypothetical syllogism (from ⌜A → B⌝ and ⌜B → C⌝ infer ⌜A → C⌝).[6]

[5] It is important in this context to distinguish the rule of inference stated in the text from a rule of proof that also goes by the name of contraposition. The rule of proof says that whenever we have a deduction of B from A, we may construct a further deduction of ⌜¬A⌝ from ⌜¬B⌝. This rule of proof is probabilistically sound, in the sense that whenever the deduction of B from A is probabilistically valid, so is the deduction of ⌜¬A⌝ from ⌜¬B⌝. See Smiley 1984, 241-2.

[6] Vann McGee (1989) identified a reasonably natural way of extending the notion of probabilistic validity so that it applies to arguments in a language, L++, whose formulae include conditionals with conditional consequents. In the expanded language, some instances of modus ponens are probabilistically invalid (see McGee 1985). See, though, Lance 1991 for some difficulties with McGee's extended account of probabilistic validity.

How well does the probabilistic account fare in codifying the standards we use for assessing deductions involving conditionals? In arguing that it fares badly, I want to revert to a topic touched on in §2, namely the relationship between reasoning with conditionals inside and outside mathematics. It is highly implausible to claim that 'if…then' has a special meaning in mathematical contexts. Our ordinary competence with conditionals enables us to understand their mathematical uses: we require and receive no special instruction in their mathematical employment. Moreover, in every language I know about, mathematicians use one of the standard conditional forms to express mathematical conditionality. At the same time, it has to be recognized that the logical rules that mathematicians use in their deductions with conditionals are stronger than those we accept as regulating such deductions outside mathematics. The key case is that of conditional proof. Mathematical proofs are littered with applications of CP.[7] Having deduced B from hypothesis A and axioms X, mathematicians assert
⌜A → B⌝ on the strength of X alone. It would be a brave logician, however, who tried to maintain that conditional proof regulates our non-mathematical deductions. Given only the classical logic of negation, CP licenses the inference from ⌜¬A⌝ to ⌜A → B⌝, but this is not a rule we go by in our non-mathematical deductions. We would not ordinarily regard as valid the inference from 'It will not rain tomorrow' to 'If it rains tomorrow, I will win the lottery'.

[7] They are also littered with uses of contraposition, but mathematical proofs only need the rule-of-proof form of contraposition, which is probabilistically sound, not the stronger rule-of-inference form. See n.5 and Smiley op. cit.

What are we to think about this divergence? Inferentialists in the philosophy of logic, who hold that the meaning of a logical particle is given by the rules to which it conforms, will have to conclude that 'if…then' does after all have different meanings in mathematical and non-mathematical contexts. For the reasons just given, though, that conclusion is very implausible, and is consequently a difficulty for inferentialism. What the case really calls for, it seems to me, is an explanation of how the common meaning of 'if…then' interacts with special features of mathematical discourse to ensure that certain logical rules that are not generally valid are nevertheless valid when deployed in mathematical deductions.

Stalnaker's (1968) semantics for indicative conditionals provides a nice example of such an explanation. He classifies ⌜A → B⌝ as true (false) according as B is true (false) at the nearest possible world where A is true (the nearest A-world, in the jargon).[8] On this account, conditional proof will have many invalid instances. If B follows from X together with A, then B is true at any world where all the members of X ∪ {A} are true. Given that all the members of X are actually true, though, we cannot even conclude that ⌜A → B⌝ is actually true: one of the members of X may be false at the nearest A-world to the actual world. It is evident, though, what suffices for the validity of conditional proof on Stalnaker's semantics: it is enough that any member of X is necessarily true if true at all. It is a plausible metaphysical thesis that the statements of pure mathematics have this property. So Stalnaker's semantics combines with that thesis to explain why conditional proof is valid within pure mathematics, even though it is not valid in general. In fact, the explanation extends naturally to cover applied mathematics too. It would be implausible to claim that Newton's Laws of Motion are absolutely necessary if true at all. But in a context where we are deducing consequences from them, all the relevant possible worlds will be ones where they hold, so that the Laws have a relative necessity. In this connection, it is noteworthy that the soundness of conditional proof does not require that A and B should be non-contingent. So, having deduced 'The apple is accelerating' from the hypothesis 'The apple is falling freely under gravity' and Newton's Laws, we can apply conditional proof to assert 'If the apple is falling freely under gravity, it is accelerating' on the strength of the Laws alone, even though there are Newtonian worlds at which the apple remains attached to the tree.

[8] Note, though, that some of our counterexamples to Adams's Thesis are also counterexamples to Stalnaker's semantic theory. The formula ⌜0=1⌝ is false at every world, and hence is false at the nearest world where A is true, no matter what A may be. According to Stalnaker's semantics, then, ⌜A → 0=1⌝ is false whatever A may be, contrary to the argument of §2.

Stalnaker's theory, then, nicely explains why conditional proof applies within mathematics even though it is invalid in general. Is there a corresponding explanation if we assume that probabilistic validity sets the standard for assessing deductive arguments involving conditionals? Because conditional proof yields the inference from ⌜¬A⌝ to ⌜A → B⌝, it clearly has some probabilistically invalid instances. In fact, the rule is probabilistically sound only when b(X) = 1, a condition which is also sufficient for soundness. In other words, Adams's account validates the full classical logic of the conditional (including the rule of conditional proof) in, and only in, the limiting case where the thinker is subjectively certain of his premisses. I think this is a serious problem for Adams's account of argumentative validity. As we have seen, mathematicians freely apply conditional proof. On Adams's account, however, they are entitled to do that only if they are subjectively certain of their axioms, and of any theorems used as premisses in conditional proofs. It is quite clear that this condition is not satisfied. Mathematics is distinguished from ordinary empirical science epistemologically: one comes to know mathematical truths by proof rather than by perception and induction. Mathematics may also be distinguished from the empirical sciences metaphysically: perhaps its truths are necessary truths. But it is not distinguished from other
disciplines by virtue of its practitioners being subjectively certain of its results. To the contrary. Practitioners of any experience know that many mathematical books and journal articles contain mistakes. Indeed, some mathematicians rationally refrain from assigning degree of belief one to (the conjunction of) the Peano axioms: as we had occasion to note above, some future Russell might derive a contradiction from them. Again, few set theorists even pretend to be certain of some of the axioms of their theory. All the same, conditional proof is freely applied within set theory. Yet this poor explanation of why a stronger logic for conditionals is used within mathematics than without would appear to be the only one that Adams's theory of probabilistic validity permits. That theory is unabashedly individualistic and subjectivist, and if the use of any stronger logic can be explained or justified in its terms, it will have to be by reference to special subjective features of practitioners. It is hard to see what a successful explanation or justification could be.
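To make the Stalnaker-style explanation vivid, here is a small Python model. Everything in it is stipulated for illustration: the three atoms, the 'nearest world' measure (fewest atom-flips, with an arbitrary tie-break), and the way entailment is modelled by excluding worlds that falsify it. Stalnaker's theory itself only requires that some selection function of this general kind pick a nearest A-world.

    from itertools import product

    ATOMS = ("L", "A", "B")   # L: an auxiliary premiss ("law"); A: hypothesis; B: conclusion

    def worlds(law_is_necessary):
        """All valuations compatible with the deduction of B from {L, A}; as a
        simplification, entailment is modelled by excluding worlds where L and A
        hold but B fails.  If law_is_necessary, only worlds where L holds remain."""
        ws = []
        for vals in product((True, False), repeat=3):
            w = dict(zip(ATOMS, vals))
            if w["L"] and w["A"] and not w["B"]:
                continue
            if law_is_necessary and not w["L"]:
                continue
            ws.append(w)
        return ws

    def nearest_A_world(w, ws):
        """A stipulated Stalnaker-style selection function: nearest A-world by number
        of atom-flips, breaking ties by preferring worlds with fewer true atoms."""
        candidates = [v for v in ws if v["A"]]
        return min(candidates, key=lambda v: (sum(v[a] != w[a] for a in ATOMS),
                                              sum(v.values())))

    def if_A_then_B(w, ws):
        return nearest_A_world(w, ws)["B"]

    actual = {"L": True, "A": False, "B": False}   # the auxiliary premiss L is actually true

    # Contingent L: B follows from {L, A}, and L is true at the actual world, yet the
    # Stalnaker conditional 'if A then B' comes out false there -- conditional proof fails.
    print(if_A_then_B(actual, worlds(law_is_necessary=False)))   # False

    # Necessary L: every world verifies L, so every A-world verifies B, and the
    # conditional holds at the actual world -- conditional proof is safe, as explained above.
    print(if_A_then_B(actual, worlds(law_is_necessary=True)))    # True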

8. Conclusion

Where does this leave us? In §§2-6, I challenged the argument that has persuaded many philosophers that conditionals are not propositions. Any theory of conditionals must say what it is for arguments involving them to be valid, and in §7 I presented a problem for the most developed account of validity that applies to conditionals on the assumption that they are not propositions. That account does not deal satisfactorily with the relationship between the logical behaviour of conditionals inside and outside mathematics. Perhaps some other non-alethic theory could do better on this score; so far as I know, though, none is on the table. Per contra, if conditionals are propositions, no special account of validity is needed: we may employ the standard account in terms of the preservation (or the necessary preservation) of truth. The upshot is clear: we can, and we should, seek a semantic theory that treats indicative conditionals as propositions. That is, we can and should seek to identify truth-conditions for indicative conditionals. Our discussion also shows that some popular accounts of those truth-conditions will not do: see again n.8 on Stalnaker. Indeed, I think that some new ideas for characterizing conditionals' truth-conditions are needed. Expounding those ideas is for another occasion but, as the Book of Common Prayer almost said, they will be unable to grow until Old Adams is properly buried. That undertaking has been the task of this paper.


REFERENCES

Adams, E.W. 1975. The Logic of Conditionals. Dordrecht: Reidel.

Boolos, G.S. and R.C. Jeffrey. 1980. Computability and Logic, 2nd edition. Cambridge: Cambridge University Press.

Bradley, R. 2000. A preservation condition for conditionals. Analysis 60: 219-222.

Bradley, R. 2007. A defence of the Ramsey Test. Mind 116: 1-21.

Edgington, D.M.D. 1986. Do conditionals have truth-conditions? Crítica 18: 1-30.

Edgington, D.M.D. 1995. On conditionals. Mind 104: 235-329.

Gibbard, A.F. 1981. Two recent theories of conditionals. In W.L. Harper, R.C. Stalnaker and C.T. Pearce, eds., Ifs (Dordrecht: Reidel), pp.211-247.

Jackson, F.C. 1979. On assertion and indicative conditionals. The Philosophical Review 88: 565-589.

Kahneman, D. 2011. Thinking, Fast and Slow. London: Allen Lane.

Lance, M. 1991. Probabilistic dependence among conditionals. The Philosophical Review 100: 269-276.

Lewis, D.K. 1976. Probabilities of conditionals and conditional probabilities. The Philosophical Review 85: 297-315.

Lewis, D.K. 1986a. Postscript to 'Probabilities of conditionals and conditional probabilities'. In D.K. Lewis, Philosophical Papers vol. II (New York: Oxford University Press), pp. 152-156.

Lewis, D.K. 1986b. Probabilities of conditionals and conditional probabilities II. The Philosophical Review 95: 581-589.


McGee, V. 1985. A counterexample to modus ponens. The Journal of Philosophy 82: 462-471.

McGee, V. 1989. Conditional probabilities and compounds of conditionals. The Philosophical Review 98: 485-541.

Pendlebury, M. 1989. The projection strategy and the truth conditions of conditional statements. Mind 98: 179-205.

Ramsey, F.P. 1929. General propositions and causality. Page references to the printing in Ramsey, ed. D.H. Mellor, Philosophical Papers (Cambridge: Cambridge University Press, 1990), pp.145-163.

Smiley, T.J. 1984. Hunter on conditionals. Proceedings of the Aristotelian Society 84: 241-249.

Stalnaker, R.C. 1968. A theory of conditionals. In N. Rescher, ed., Studies in Logical Theory (Oxford: Blackwell), pp.98-112.

