THE ART OF NECESSITY:

Deductivism, Modality, and the Limiting of Reason

Adrian Heathcote

Department of Traditional and Modern Philosophy, The University of Sydney

Contents

1 Introduction
2 Abductive Inference and Invalidity
   2.1 Imprimis
   2.2 Invalidity
   2.3 Implications
   2.4 Inconsistency
   2.5 Inference
   2.6 Infix
3 Validity and Necessity
   3.1 The Conditions of Validity
   3.2 The Ambiguity of Truth Tables
   3.3 Substitutional Semantics
   3.4 First Order Models
   3.5 Tarski and Logical Consequence
4 Models and Modalities
   4.1 The Necessity of Set Theory
   4.2 The Insufficiency of Set Theory
   4.3 The First-Order Thesis
   4.4 Skolem Relativity
   4.5 Modal Logic and First-Order Logic
5 Circularity and Deductivism
   5.1 Philosophical Preliminaries
   5.2 The Nature of Circularity
   5.3 Relevance and Conditionals
   5.4 The Measure of Necessity
   5.5 Conclusion
6 The Impoverishment of Empiricism
   6.1 The Medieval Origins
   6.2 Ockhamist Empiricism
   6.3 The Generalised Euthyphro
   6.4 Empiricism and Mathematics
   6.5 Conclusion


Chapter 1

Introduction

What do we need, in the way of inferential machinery, to properly think about the world? How much is enough? A very common answer to this question is: not much; first-order logic, the staple of logic courses throughout the world, should be sufficient. But is that right? Can all rational enquiry be reduced to that handful of rules? I argue in this book that it cannot.

Let me coin the term inferential essentialism to describe the view that some set of inferences I is sufficient to reason about the world. Obviously I may be very large or very small. This book is particularly directed at two common forms of inferential essentialism that take I to be rather small. The first view I call, following David Stove, Deductivism.[1] It insists that only deductive inference is proper, correct, reasoning; but it is more usefully characterised by what it rules out, which is principally probabilistic and abductive reasoning. The second view is more restrictive still: it could be given the ungainly name first-orderism. It holds that only first-order inference is to count as inference proper. Typically one who holds this view will reject modal inferences, or higher-order inferences, along with, though not necessarily, probabilistic and abductive inference.

All philosophers are inferential essentialists of some sort or another—i.e. everyone has a view of how much we need in the way of inferential resources—though it may not always be explicit. Moreover the vast majority of philosophers, now and throughout history, have thought that I must be a rather small set. This is even reflected in the way that philosophers are taught—for we usually think that a course in first-order logic is sufficient training to deal

[1] David Stove, 'Hume, Probability, and Induction', Philosophical Review, 1967, pp. 160–177.


with philosophical problems, and most undergraduates would be exposed to no more. There may be philosophical reasons why such a narrow base set is thought to be attractive: philosophers have wanted both to propose theories as well as to criticise the theories of others. However, if we are to do this then we must be in possession of a comparatively simple set of inferential rules so that we can check any given inference against them. If the inference set were infinite, or even just very large, it would be more difficult to make such a check. And were this so we would not be able to see of every given inference whether it was valid or invalid—leading to a kind of inferential or critical paralysis. But to say that this view has been commonly held does not necessarily mean that it has been explicitly avowed—rather my point is that philosophy seems often to be conducted as if it were being held to be true. It is, as it were, part of the unconscious of philosophy.

I want to argue in this book for a large, possibly infinite, set I. That is, I argue that narrow inferential essentialism has been responsible for a large number of philosophical problems, of a rather disparate character. Freeing philosophy from this assumption will produce a more realistic methodology, as well as giving us greater leverage on philosophical problems. But that is for the remainder of the book—in this chapter I want to describe the historical background in more detail, to set the scene for what follows.

*

Throughout its history philosophy has been largely Deductivist. This estimate of the ubiquitousness of Deductivism may seem to contain an exaggeration, but if it does it is only, I believe, a very slight one. There have indeed been many philosophers who have argued for the importance of inductive inference, or probabilistic inference in general—and, indeed, much of the best work of the last sixty years has been due to them. (The names of Ramsey, Carnap, Richard Jeffrey, Henry Kyburg, Brian Skyrms, and Patrick Suppes spring immediately to mind.) Likewise one of the greatest philosophers of the Seventeenth Century, Leibniz, had an advanced and prescient understanding of the importance of probabilistic inference. And in the same century we have the invaluable work of Arnauld and Nicole, in The Port-Royal Logic. But as philosophically important as these contributions have been they clearly do not constitute the dominant tradition. From St Augustine through to the end of the Middle Ages, with the Ockhamists, and on into the early-modern philosophy of Descartes, Spinoza, Locke, Berkeley, Hume, Kant, the entire Continental Idealist tradition to the present day, the Nineteenth Century British Idealist tradition, Russell, Moore, Wittgenstein, Ayer, the Oxford Ordinary Language philosophers, Popper, Feyerabend, Quine, Rorty and many more—all


have explicitly eschewed probabilistic inference, or else simply ignored it completely. If one looks at the main themes that have dominated philosophy, the Deductivist core is readily apparent—proponents of non-deductive modes of inference, like Arnauld, Leibniz, Mill, and Keynes have been minor dissenting voices largely unheard in the din of Deductivist Orthodoxy.

The philosopher who made the difference was, I suggest, David Hume. Prior to the middle of the Seventeenth Century the neglect of non-deductive inference was perfectly natural since almost nothing was known; there was nothing that constituted even the beginnings of a formal theory of probabilistic inference. By the end of the Seventeenth Century, however, thanks to the efforts of Pascal, Fermat, Huygens, Leibniz, Jacob Bernoulli and others much was known and the beginnings of a formal theory were present. Had philosophy at that point continued in the direction that Leibniz was indicating—in his epistemology at least—our tradition might have been very different to what it was. Two significant movements intervened, however: the Idealist response to Cartesian Doubt, represented by Berkeley and Hume (of the Treatise, at least); and Hume's inductive scepticism. The first replaced an external world about which evidence could be assembled and competing claims weighed, with an internal world that could be known directly and infallibly, and in which probability had no place; and the second cast doubt on the objective rationality of inductive expectations. After Hume the employment of probabilistic reasoning by philosophers dwindles away to a trickle (with Mill being the great exception); Kant, Hume's great heir, rarely mentions probability and never employs it.
Probability survived among the mathematicians and scientists of the Eighteenth and Nineteenth Centuries—Poisson, Laplace, de Moivre, Markov, Gauss, Maxwell, Boltzmann, and others—but disappears in the, predominantly Idealist, philosophy of the time. When interest was revived in the 1930s and 40s it came as an intrusion from the outside, as a European importation from mathematics that derived ultimately from the Moscow School of Markov, Khinchin and Kolmogorov.

But if the rejection of probabilistic reasoning in philosophy from the middle of the Eighteenth Century onwards was largely due to the influence of Hume, there is still the question of how Deductivism originated. The simple answer, canvassed above, that before the Seventeenth Century nothing was known, begs the obvious question of why not? How was it that, with an interest in games of chance going back at least to Antiquity, and the use of evidential enquiry being much older still, a theory of probability and chance took so long to develop? It is at least possible that the focus on deductive inference inhibited the mathematisation of the subject, for that only emerged with Cardano in the Renaissance, when Scholasticism had begun to loosen its hold on the


scientific mind. However, this idea that, on one side of the Seventeenth Century, it was Scholasticism that stifled probability theory, whereas on the other it was Hume, though undoubtedly correct in outline, presents too discontinuous a picture of the history of philosophy. In fact, I maintain, it was the very same impulse at work on either side of the interregnum represented by the Scientific Revolution, and it succeeded in both instances in causing a significant divergence of science from philosophy—making for the 'two cultures' of C.P. Snow.

One can trace this original impulse back to William of Ockham. Ockham represented a radical, and purifying, change in the development of philosophy—he was the first philosopher to be, in Reinhold Seeberg's phrase, 'fanatical about logic.' Certainly Ockham was the beneficiary of the logical researches of the Twelfth and Thirteenth Centuries, but he possessed a synoptic view that went well beyond that of his predecessors, and a quite unique understanding of the modal concepts involved in logical inference itself. In his Summa Totius Logicæ we find a schematic breakdown of the forms of admissible inference. For example, the eighth to eleventh propositions say:

From something necessary something contingent does not follow;

Something impossible does not follow from something possible;

From an impossibility anything follows;

What is necessary follows from everything.
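A modern symbolisation of the four schemata may help fix their content; the notation (□ for necessity, ◇ for possibility, and p ⇒ q for 'q follows from p') is of course mine, not Ockham's, and is offered only as a gloss:

```latex
% 1. From something necessary something contingent does not follow:
\Box p \wedge (p \Rightarrow q) \;\rightarrow\; \Box q
% 2. Something impossible does not follow from something possible:
\Diamond p \wedge (p \Rightarrow q) \;\rightarrow\; \Diamond q
% 3. From an impossibility anything follows:
\neg\Diamond p \;\rightarrow\; (p \Rightarrow q), \text{ for any } q
% 4. What is necessary follows from everything:
\Box q \;\rightarrow\; (p \Rightarrow q), \text{ for any } p
```

On this rendering the first schema states that whatever follows from a necessity is itself necessary: the inferential closure of the necessary discussed in what follows.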

The significance of these schemata lies in the way they partition statements into distinctive categories, and immediately suggest differing epistemic rôles. For if one combines these statements with a general empiricism, derived from Aristotle and prominent in Aquinas, that all knowledge derives from sensory experience, and the further claim that experience delivers only contingent truths, then certain things follow. The propositions given above tell us that necessary truths only imply necessary truths, that they are, in a phrase, inferentially closed; but this means that a knowledge of what is necessarily true—for example, that God exists—can reveal nothing about the world that we find around us. Thus we can't check whether a necessity is really true by making any observations on the natural world. And just as the inferentially closed realm of the necessarily true does not reach down to the contingent, so also God is an unknowable being, remote and removed from what occurs here on Earth. The two realms of modal propositions, the necessary and contingent, are reflected in the two realms of Being: the divine and the secular. Ockham


staunchly maintained this separation and held that the divine was entirely a matter of faith.

It is apparent that Ockham's characterisation of the various kinds of inference represents not just a development of modal notions within logic but the first real attempt at a meta-logic, a logic of logic. For with Ockham's list of propositions it becomes possible to make judgments about the validity of whole groups of inferences simply by examining the character of their premises and conclusions. It is here, I think, that Ockham's startling originality lies, and it seems to have been missed by the traditional commentators on Ockham. With this powerful set of modal principles governing the nature of logical entailment we have the beginnings of Deductivism.

All of Ockham's philosophy was a matter of uncompromisingly drawing out the consequences of this relatively small set of propositions about the nature of logic and experience. His nominalism, for example, was really little more than an insistence that we are acquainted in our experience only with particulars: we perceive no essences (which would be necessities) and it will accomplish nothing to place such essences as concepts in the mind of God. It does not appear to have been Ockham's intention to deny that there is an objective difference between, say, a hawk and a hand-saw, merely that whatever makes that difference cannot be an independent entity. In a sense, perhaps, we might see him as more resembling a modern Trope theorist than a modern nominalist. Likewise Ockham's stand on causal necessities can be seen to be a consequence of his views on what experience can reveal. He held that there was no necessity making one thing happen as a result of something else happening; all there is to causal situations is regular succession—a purely contingent relation—for necessities are not, by their nature, observable.
This goes along with the Two Realms idea: the world of nature is a world of contingency, and the necessities are the special preserve of the Divine.

It was here perhaps, more than anywhere else, that Ockham's influence on his contemporaries was greatest, for his doctrine about natural necessities had profound theological consequences. To maintain that there were no causal necessities allowed God complete freedom to act in the world, and, indeed, Ockham and his followers later in the Fourteenth Century outbid one another in what they claimed God could do. For God was not bound by covenants, or our earthly conceptions of right and wrong, or our sense of what should reward the pious Christian life: God was free to love those who hated him and hate those who loved him; He is absolutely unconstrained in what He may will, and nothing can compel a particular response from Him, thus grace is only ever freely given, never earned: He is ultimately ineffable and unknowable. One can see in this the beginnings of Luther's protest against the Church, and also perhaps


of the mystical irrationalism of the later neo-Platonists. If Aquinas is the great synthesiser, trying to make all of our knowledge, religious and secular, of a piece, then Ockham is the great decoupler—of the divine from the secular, but also of the parts of the natural world from each other. In Ockhamist metaphysics we have a punctiform knowledge of particulars but there is nothing that 'cements' any of these particulars together to make a functioning whole: the world is all topologically disconnected.

A measure of Ockham's enormous influence can be gleaned from the complaint of the Chancellor of Florence, the Humanist and Bibliophile, Coluccio Salutati, at the end of the Fourteenth Century, a complaint addressed to his countrymen who

fly the heights of logic and philosophy without understanding or even reading the texts of Aristotle, for they search out among 'the Britons at the end of the world' this or that treatise, as if our Italy was not sufficient for their erudition. These works they pore over without books and the writers of good philosophy to help them, and they learn from them dialectic and physics and whatsoever soaring (transcendens) speculation deals with.[2]

Undoubtedly Salutati intended to refer to philosophers beyond the Ockhamist group, philosophers like Thomas Bradwardine, Richard Suiseth and William of Hentisbery—but these were of comparatively minor influence as against the Oxford Ockhamists. But whatever influence Ockham had in shaping the Humanism of the Italian City States is itself insignificant in comparison with his influence closer to home.

Certainly it is striking that we find Ockham's argument on causal necessities repeated, with little modification, four centuries later in Berkeley. In the Principles he notes that

As to the opinion that there are no corporeal causes, this has been heretofore maintained by some of the Schoolmen, as it is of late by others among the modern philosophers; who though they allow matter to exist, yet will have God alone to be the efficient cause of all things. These men saw that among all the objects of sense there was none which had any power or activity included in it; and that by consequence this was likewise true of whatever bodies they supposed to exist without the mind, like unto the immediate

[2] See The Fifteenth Century, 1399–1485 by E.F. Jacob, in The Oxford History of England series, Oxford: Oxford University Press, 1961, pp. 676–7.


objects of sense. But then, that they should suppose an innumerable multitude of created beings, which they acknowledge are not capable of producing any one effect in nature, and which therefore are made to no manner of purpose, since God might have done everything as well without them—this I say, though we should allow it possible, must yet be a very unaccountable and extravagant supposition. (Principles of Human Knowledge, sect. 53)

Thus, according to Berkeley, natural causation does not exist because nothing natural exists—it is all Spirit. Why try to mix an intensional causation with a physical ontology? Why not make it all intensional? Berkeley objects to the extravagant hypothesis of the Schoolmen only because he prefers the even more extravagant hypothesis that is his own.

By comparison, Hume's views on causation seem to be a return to a pure Ockhamist position. He believes that the causal necessities we think are in the world are really a projection of our inductive expectations based on past experience and he goes on to insinuate that there is something logically unintelligible in supposing them to exist in the world. The basis of this unintelligibility appears to lie in an Ockhamist insistence that we can observe only particulars (bundles of Hume's sensible qualities) and that these particulars are all contingent entities—after all, what would it be like to observe in a single instance a necessity, something that could not have been otherwise? How would we observe that could not? Hume finds it absurd that a sensible quality, or even the repetition of a sensible quality, could disclose this modal nature. Thus Hume believes that our belief in causal necessities is an illusion fostered by our habit of reading into the world things that have only a subjective existence: in short, causal necessities are secondary qualities. On the question—surely, now, the pressing question—of how change occurs, of what makes one thing happen seemingly as a result of something else happening, Hume is either a strong sceptic (there is nothing there because there can be nothing there) or a weak sceptic (there may be something there but we can't know about it) depending on which passages one chooses to emphasise.[3] Unlike the Scholastics Hume

[3] I am aware that I am here jumping into a lively contemporary debate on the correct interpretation of Hume, the debate on whether Hume is a Realist or not. The terms of this debate have often seemed a little mysterious to the present author, so let me just say here that I think that those who think Hume is a Realist (under the skin, or whatever) have been reading Hume—encouraged one has to say by his own inadvertent misdirection—too much in the light of the contemporary scientific (i.e. Newtonian) philosophy. Such a reading flies in the face of his manifest ignoring of contemporary science (except in one solitary footnote) and is forced to mangle his quite explicit and clear arguments to the contrary. My own view is that Hume is better seen as reviving a Medieval Ockhamism in the light of contemporary Lockean empiri-


does not have available to him a God who can fill in the gaps and provide the active force so he is left only with human psychology. This is, then, forced to do a great deal of the explanatory work that would normally be left to natural science.

But if Hume's metaphysics could be described as 'Ockham minus God' his epistemology is simply pure Ockham (for the simple reason that Ockham's epistemology is already minus God). His denial of causal necessities decouples events one from the other so that all that is left is regular succession and this, in turn, has implications for the 'foundations of induction.' Neither pure (Demonstrative) reasoning nor deducing the logical consequences from our past experience can lead us to believe that the future will resemble the past, since it is always conceivable, or possible, that 'the course of nature may change,' that snow may, in the future, burn rather than feel cold. But how does Hume know that it is possible that the course of nature may change? Only because he antecedently believes that there is no necessary connection between touching snow and it feeling cold—for if there were it would not be possible for it to be otherwise.[4] However, once Hume draws the conclusion that it is possible that the course of nature may change he is immediately inclined to believe that 'inductive inference is without foundation' and therefore, itself, merely a subjective matter. Thus Hume's philosophy is of a piece: necessary connections are projections of our inductive expectations and the latter are merely the workings of custom and habit. In this way Hume appears to have shown that there can be no inductive inference—that deductive inference is the only legitimate form of inference, the only one that 'has a foundation.' He has, in other words, proven inductive scepticism.

The attentive reader of Hume may, however, wonder whether he has done anything more than simply assume inductive scepticism—or, what amounts to the same thing, assume Deductivism. His argument merely shows that our inductive expectations cannot have a deductive foundation; it is entirely neutral on the question of whether they have any other sort of foundation, or whether they might not simply be a different species of inference altogether.[5] Indeed if one

cism. Thus he is a late Medieval rather than an early Modern. (But one has to say that the misunderstandings of his philosophy were much abetted by his infuriating habit of ignoring all other philosophers—as though he were philosophising de novo.)

[4] I cannot see that Hume has any non-question-begging reason to believe that it is not a necessary truth that snow, tomorrow, will feel cold to the touch—particularly if we allow metaphysical necessities. And his stated reason is manifestly weak and, even, irrelevant. (There is also a confusion that runs through Hume's entire discussion of necessary connections between types of events and between tokens. But I will not try to unravel it here.)

[5] Of course if one could show that our inductive expectations were unjustified even as a species of probabilistic inference then one would have an interesting argument for inductive


examines Hume’s argument one can see that it is another example of an infer- ential closure argument: descriptions of our actual experience are deductively closed, in the sense that no statements about what we might, in the future, experience, follow from them. It will help to have the argument before us in more detail. Let F be a state- ment about the future (say, that tomorrow snow will feel cold to the touch, or that bread will nourish); let N be a set of necessary statements, as complete as our knowledge allows—that constitutes the basis for what Hume calls demon- strative reasoning ; let P constitute a set of statements that summarises our past experience. Now, form the deductive closure of N—denoted Cl( N)—which is to consist of the set of logical consequences of N, and likewise the deductive closure of P—denoted Cl ( P). Hume’s inductive scepticism is, very simply, the claim that F / Cl(N) Cl(P). In other words all statements about the future na- tures of objects are outside the deductive closure of what we might genuinely know by reason—which is delimited by Cl( N) and Cl(P). (Note that if we were to consider the probabilistic closure of the set P— which consists of all the statements rendered probable by P, and which we will denote Pr( P)—then the argument no longer obviously goes through—indeed seems pretty obviously false, in the light of the success of predictive statistical methods.) In fact, once we see Hume’s argument in this light it becomes clear that many of his distinctive arguments are of the same kind—in particular his is- ought gap argument is another inferential closure argument. (All ought state- ments are outside the inferential closure of all is-statements.) So also is his Idealist argument (in the Treatise) for our having no knowledge (based on Rea- son) of an external world. (All statements about the external world are outside the inferential closure of the set of sense-data statements.) 

These inferential closure arguments come, I maintain, originally from Ockham, and have their basis in the logical and theological currents of the Fourteenth Century. It is for this reason that I think Humeanism should be regarded as a phase of a remontant Ockhamism. And if this makes Hume seem less original than we might have thought, it should, at least, restore to Ockham some of the recognition that has been denied him.[6]

There is, however, a problem with Ockhamism, whether in the original or

scepticism. But if not, not.

[6] It is ironic that Ockham is remembered mainly for his razor (Frustra fit per plura quod potest fieri per pauciora, or as it is commonly rendered, 'don't multiply entities beyond necessity') since this is not at all original to him. The principle is known to have been adduced by Peter of Spain, later Pope John XXI, over seventy years earlier, and is likely to be much older than that.


in Hume's formulation, that does not appear to have been noticed until long after the view had become accepted, and even taken for granted. The problem is that in espousing the empiricism that was current at the time, Ockham did not have available to him the resources necessary to provide an account of our knowledge of logical and mathematical truths. On his own view such truths were necessary truths, as were valid inferences (usually seen as necessary conditionals, or consequentiæ). Yet, also on his own account, observations yielded only contingent statements to serve as premises, or antecedents; how then do we know the very logical truths which serve as the machinery for ensuring inferential closure? How did these necessary truths manage to get through the finely sifting empiricist filter? For Ockham these statements were ultimately grounded either in faith and revelation, or in meaning.

By Hume's time it had become customary to try to solve this problem by thinking of these logical truths as true solely in virtue of their meanings, and this has become the standard Empiricist view ever since.[7] The problem is that this will involve us in a Euthyphro dilemma: are logical statements necessary because they are analytic; or is the language constrained by an antecedent set of necessary truths that place limitations on what sentences may mean—in other words, are they apparently analytic only because of some underlying necessity? (For example, does the law of non-contradiction seem true because of the arbitrary meaning assigned to 'and' and 'not', or is the meaning constrained to reflect some antecedent necessary logical truths?) If we try to explain the necessity of necessary truths as arising out of analyticities then we will be left with nothing to explain how these analyticities arise. Language will be entirely contingent—and so, consequently, will the necessary truths! (The analogy with the standard Euthyphro is quite apparent here:

if goodness arises from the will of the gods, and that will is not a matter of conforming to an antecedent goodness—if it is not so constrained—then the gods could have willed what we now regard as bad to be good.)

It took Quine, as the great successor to Hume and the Empiricist tradition, to embrace this horn of the dilemma with the needed radicalism; for only Quine seemed to be aware that even logical necessities had to be purged from a properly constituted, and self-consistent (itself a modal notion!), Empiricism. Thus, Quine embraced the idea that necessities arise only out of analyticities—and then embraced the consequent idea that there are no genuine analyticities: it is all

[7] Though this view of Hume is very much the standard interpretation, there has been at least one voice raised in dissent: see W. A. Suchting, 'Hume and Necessary Truth', Dialogue, v (1966), pp. 47–60. Suchting argues that Hume should be seen as having a psychological theory of necessary truth rather than a linguistic one. The following Euthyphro dilemma will apply mutatis mutandis to a psychological theory as well, however.


contingent, and may change as our account of the world changes. (Just as, with the standard Euthyphro, Ockham had earlier argued that goodness arises only out of God's Will, and that since that Will is completely unconstrained, it may change at any moment, making the bad suddenly good.) But as audacious as Quine's response to the problem was, and is, it is also on examination full of difficulties and scarcely believable.[8] By following Ockhamist Empiricism to its logical conclusion Quine showed, with great effectiveness, that the view is unworkable. For this elimination of logical necessities to work it would be necessary for there to be a systematic presentation of first-order logical truth and validity in terms that did not involve necessity but that instead made use of the free substitution of linguistic items. This is how Tarski's account of logical truth and consequence assumed its importance, whether he intended it to be used this way or not: it offered a completion of Ockhamist Empiricism that would be satisfactory on its own (i.e. Quinean) terms. I will argue in chapters three and four that this will not work and that, in effect, the above Euthyphro cannot be solved this way: we need modal notions if we are to understand logical truth—or logical truth will itself become undefinable.

This touches on another matter. As I suggested earlier, Deductivism is likely to go along with the view that logical inference is, somehow, psychologically tractable. This, in turn, is likely to accompany the notion that logic is not continuous with the surrounding mathematics, that it is not, for example, hostage to the existence of suspect abstract entities, like numbers, or vectors, or sets. For if it were hostage to such entities awkward epistemological questions could be raised concerning our access to logical truth. In a sense our knowledge of logical truth would become conditional on our knowledge of the existence of such entities.
Such a situation would put in jeopardy the entire notion, central to Deductivism, that logical inference is our prior, and exclusive, means of deciding the cogency of metaphysical claims; that it is not only the final arbiter, but the sole one. This idea, that our logic of choice is the sole logic that there is; that our logic is not metaphysically hostage to the existence of abstract entities; that it is not about anything particular, but about everything; that it is, in a phrase, topic-neutral—this idea, or bundle of ideas, is, when applied

8 Of course, some might say that there is much more to Quine's position than just the response to the above Euthyphro. One might emphasise his rejection of intensional notions, the failure of substitutability, the need for a purely extensional language for physics, etc. Yet much of this is really the Euthyphro in disguised form. What reason does Quine have for believing that science is, or will be, expressible in a purely extensional language? No reason that I can discern. (We may agree that it would be advantageous if it could be so expressed, but that is no reason to believe that it can be.)


The Art of Necessity

to first-order logic, called the first-order thesis. It goes with the ancient idea that logic is, so to speak, a skeleton, or frame, or common-core, over which the world is draped—the world consisting of contingent embellishments on a stem of necessity. Thus we find in Wittgenstein the view that logical truths are semantically empty—they are about nothing at all. And if logic stands apart from, and above, the world to which it applies, then it is a short step to imagining that philosophy does the same. As Wittgenstein put it in the Tractatus:

‘Philosophy is not one of the natural sciences. (The word ‘philosophy’ must mean something whose place is above or below the natural sciences, not beside them.)’ (4.111). It is an idea that goes well with the classical notion that philosophy is a universal method—since, in some sense, it is about everything there is no room left for it to reflect on itself; there is nowhere to stand that is not just philosophy all over again—there is no place from which to exert an Archimedean leverage. I believe that these ideas of separateness and discontinuity are false and, ultimately, damaging; it is one of the main purposes of this book to try to show them so.

Mention of (Tractatus-era) Wittgenstein in this context is not accidental as he also is best seen, I suggest, as an Ockhamite Empiricist, and much of his philosophy recapitulates the tradition from Scotus through to Hume. For example, Wittgenstein’s theory of truth is essentially that of Duns Scotus and therefore Ockham, his modal logic and even his account of the mystical is Ockhamist, and his inductive scepticism, his denial of necessary connections, his demarcation of the meaningful from the meaningless is likewise fundamentally Ockhamist. Where Wittgenstein differs from Hume and much of the later post-Lockean Empiricist tradition is that he is explicitly a Realist. But the same Deductivist thread that runs through the Ockhamist-Humean tradition is also present in Wittgenstein; indeed I think Wittgenstein is the philosopher who best represents, within the Ockhamist Empiricist tradition, a purely Ockhamist logic and metaphysics, freed from the phenomenalism and Idealism of the post-Cartesian Empiricists. By placing Wittgenstein within this line of descent I want to break with the standard view which attempts to present him—as perhaps he wished to present himself—as a solitary figure working outside any tradition.
The points of similarity, even between his philosophy and Hume’s, are simply too great to ignore.9

9 I am referring here only to Tractatus-era Wittgenstein—not to the Idealist Wittgenstein of the Investigations. It is an interesting question as to how Wittgenstein came by this influence. I don’t think it can be ruled out that, by this time, these ideas had so saturated European thought that they were simply ‘in the air’.

I have already suggested that the Ockhamist Empiricist tradition rather inevitably produces a particular metaphysical picture, one that we find in Ockham himself, in Hume and the Humean tradition, and in Wittgenstein. By making everything in the natural world contingent it excluded natural necessities, producing a causally disconnected, discrete, world-view. In Ockham’s time there was a sound theological reason for believing this: the natural world was the created world, and thus its existence was contingent upon God’s decision to create it; had there been any necessities in the world they would have placed a limitation on God’s Will. As we have seen, the completely untrammeled nature of God’s freedom was Ockham’s ruling principle. It was, thus, completely in keeping with his fundamental beliefs that Ockham saw the world as disconnected and, in a sense, incomplete—God was a necessary being who united the contingent natural world, who knitted it into a whole. But by the time we come to Hume such a metaphysics makes no real intellectual sense—and seemed to his contemporaries to fly in the face of well-established science. In Hume such disconnectedness goes under the name Humean Distinctness, and sometimes Humean Supervenience. It says that the world consists of ‘loose and separate’ facts and that everything supervenes on such facts. It is a view that has been endorsed by many modern philosophers under the description, and cover, of Hume’s Regularity Theory of Causation.

The thesis of Humean Distinctness is a metaphysical doctrine that is intimately connected with the view that is often called Logical Atomism. If, at bottom, the world consists of simple facts, or states of affairs, that stand as truth-makers to simple propositions, then, if there were any necessary connections between these facts, they would not be properly atomic, but rather sub-components to the genuinely atomic state of affairs, of which they are abstracted parts. This is because logical atomism is intimately linked to the idea that atoms should be free to enter into larger and larger molecules in an unrestricted manner.
Thus, atoms are essential because they are the building blocks of a combinatorial theory of possibility: the actual world is one large molecule, another possible world is a rearrangement to give another molecule. This is the view that we find behind the Tractatus but it is also the underlying metaphysics in Hume, where it could be called Phenomenal Atomism: there is only punctiform sense experience and there can be no connections between distinct sense experience that is not also, contrary to either common understanding or our phenomenal limitations, simply another sense experience. Thus Humean Distinctness implies Atomism.

I do not argue the point in what follows, but my view is that this Atomism, and the Distinctness Thesis that goes with it, is fairly decisively controverted by Quantum Mechanics. Indeed it is this that is usually meant when it is said that quantum systems can exhibit non-local behaviour. If one has a pair of particles, prepared so as to be anti-correlated, then the pure state of the composite system does not supervene on the mixed states of the components.10 This non-supervenience is as yet poorly understood but it suggests that there is a state-space connection between the two particles, that belies their physical, spatial, separation. This is likely to be a fairly widespread phenomenon, so widespread that there seems little hope of recovering a physically meaningful logical atomism. Thus Ockhamist Empiricism has fostered a metaphysics that is empirically false. It is time to abandon it.

My claim in this book, then, is that Deductivism and the, historically, closely associated views of Ockhamism and the first-order thesis, are false and have led us into some pervasive errors. Firstly, in trying to make deductive reasoning do all the inferential work, it has given us an unrealistically narrow base for our epistemology, a narrowness that is all too evident in the most influential epistemologists: Descartes, Hume and Kant. Secondly, I maintain that in depriving us of an adequate account of necessity it has undercut itself, by allowing no account of logical truth that is both adequate and consistent with its principles. Thirdly, the official reliance on a manifestly inadequate system of deductive reasoning has led to, what one might call, a black-market in informal logic: philosophy has had to rely on a set of informal proscriptions against, say, circular reasoning, or infinite regress, where the justification for such proscriptions has been manifestly unclear. Often, in fact, we haven’t even known what such charges amount to. There is, for example, a vast and inconclusive literature on the topic of circularity that often leaves it unclear whether any deductive inference at all could be free from the fault. But there is an even more egregious problem.
Although we know that certain oft-used formal logical systems are sound there has never, to my knowledge, been an argument to suggest that the result of adding these informal proscriptions to that corpus of deductive inferences will result in anything that is consistent, sound, or even useful. For all that we know we might be working with a set of rules and principles that could never be guaranteed to deliver philosophical truths, and that might, indeed, be systematically producing nonsense. I believe that such worries, far from just being idle possibilities, must be

10 An account of this matter can be found in, for example, R.I.G. Hughes, The Structure and Interpretation of Quantum Mechanics (Cambridge: Harvard University Press), 1989. See also my Critical Notice of the above in the Australasian Journal of Philosophy, June, 1994. I should say that there seems no reason at present to believe that, what one could call, object atomism is false: matter does seem to decompose into particulate ‘atoms’, namely elementary particles. The atomism that fails in Quantum Theory might be described as property, or attribute, atomism (or, as it is sometimes referred to in Quantum Mechanics, factorisability). Of course, I believe that causal connectedness also shows that the Humean Distinctness Thesis is false, but since it has been much contended against I do not wish to rest too much weight on it here.


taken seriously. In everyday philosophical practice it is all too easy to forget that philosophical methodology has accumulated to its present state from purely contingent historical causes. We are the heirs of argument strategies that originated 2500 years ago in a very different place and at a very different level of logical and scientific knowledge. The Athenian Greeks may even have had very different purposes to those we now have. (Do we still want our philosophy to produce wisdom in their sense? Is it still profitable to try to expose all claims to knowledge as impudent, or hubristic, as Socrates attempted to do? Should we still think of the Socratic Dialogue as the proper model for philosophising?) As philosophy has developed to the present day it is characterised by fractures that run partly along the lines of this ancient indebtedness. For example, there are those who believe that all philosophical problems can be traced back in some form to Plato—and even that they may find their best expression there—while others think that the distant past has little, or nothing, to teach them.11 This is not a debate that I intend to enter into here (or elsewhere, if it comes to that). My concern is with the soundness of the methodology that we now possess. I will argue in chapter five that Deductivism is responsible for some of the confusions about the nature of circular reasoning, taking a dialectical fault and trying to understand it using only the resources of deductive reasoning. I believe that this also resolves some other long-standing puzzles about the nature of logic. Thus, at least in part, this book has as its concern—and I am aware that the term will probably have a number of disagreeable associations for at least some readers—a claim in metaphilosophy. I argue that philosophy itself has gone awry from its having a fundamentally flawed methodology, or at least from the absence of a more complete set of methodological tools. In one sense, then, this book is about the history of an idea, but it is also a straightforward discussion of certain standard topics in the philosophy of logic: necessity, validity, implication, and inference. And just as mathematics has benefitted enormously from the development of meta-mathematics, so too can philosophy benefit from the close scrutiny of its argument strategies, and its intimate relationship with formal logic. In short, from a metaphilosophy.

*

11 A disagreement that is often reflected in disputes about the proper structuring of the teaching syllabus. For a spirited defence of the opposing view see Martha Nussbaum’s Cultivating Humanity. Of course the above remarks are not intended to disparage the Ancients, merely to point out that the gulf that separates us from them is the result of our having two and a half millennia to digest their arguments. We can thus neither go ‘back to the Greeks’ nor pretend that what we have learned since then does not affect our understanding of their limitations.


It will be worth stating at the outset what I am not going to attempt in this book—to allay any false hopes. Firstly, I am not going to defend, or extend, probabilistic reasoning. That has been done ably by others. Secondly, I am not going to develop, or extend, non first-order deductive logics—that project ought to be driven by immediate practical concerns about the need for something beyond existing inferential methods. And again, it has been done ably by others. No, my concern here is to examine the detrimental effect that a particular, very widely pervasive, doctrine has had on matters that are of immediate concern to all working philosophers. Matters concerning inference, the nature of logical truth, the method of appraising arguments, and a great deal else besides. But I hope that the historical thread runs strongly through the technicalities—making it clear that there are broader explanatory issues that are at stake.

The plan of the book is as follows: chapter two looks at one immediate consequence for philosophical methodology of the exclusive reliance on deductive inference. It argues that abductive inference is essential. Chapters 3 & 4 discuss the attempt to eliminate necessities from logic—an attempt that, if successful, would effectively complete the Ockhamist program, by turning its own scepticism against its inferential base. I argue that it has substantial metaphysical costs that make it unappealing. In chapter 5 I turn to other aspects of inference that have been rendered obscure and problematic by Deductivism. Chapter 6 closes with a discussion of how to reform Empiricism and the philosophy of mathematics without the distorting influence of Deductivism and the Ockhamist tradition.

Thus it is my purpose to show that seemingly diverse problems have the same underlying root cause—and that this cause has been missed hitherto because it runs so deeply through our philosophical tradition.
My hope is that now, when we have untangled some of the knots that this philosophy has tied us in, we will finish with something that is methodologically clearer and richer—and also less likely to place us at ‘random from the truth’.

Chapter 2

Abductive Inference and Invalidity

Abductive inference, or inference to the best explanation (in Gilbert Harman’s felicitous phrase), is useful precisely when there are no deductive grounds for accepting a given conclusion. In scientific contexts we take it to be rational to accept the best explanation that we can find for a phenomenon while acknowledging that the data do not deductively force the choice of any explanation. We take this to be rational because we think that a rational person should seek to optimise the truth of their beliefs about the world, and to accept the best explanation is surely to accept the theory that is the optimal choice. Abductive inferences, then, just like their near relations the inductive inferences, allow us to make choices between competing theories and explanations in circumstances in which deductive certainty is unattainable. Put this way inference to the best explanation looks to be an inevitable component of our rag-bag of maxims governing rational action and explanation (a rag-bag that will include Ockham’s Razor and other principles). To many it may appear even more exalted; it may be seen as a virtual tautology: if we shouldn’t accept the best explanation, then what? the second-best explanation? the fifth?


2.1 Imprimis

Yet although inference to the best explanation looks at first blush to be a necessary part of our criteria for theory acceptance it is not unassailable. A critic might concede that we do use such inferences in our scientific reasoning and yet hold that it is not rational to do so. This has been claimed about inductive


reasoning, from Hume through to the Popperians, so why not hold it about abductive inference as well? In particular someone might hold that abductive reasoning, like inductive reasoning, requires a foundation, a non-circular justification. If none can be found then abductive inference is ‘without warrant’, as Hume put it when discussing induction.

In the previous chapter I described a position which I called Deductivism. It is the view that only deductive inference is rational, well-founded, or ‘warranted’. For the Deductivist, therefore, neither induction nor abduction provide rational grounds for believing a conclusion. When deductive grounds are absent we have scepticism or, as sometimes with Hume, Nature herself must make up the difference by providing us with the requisite beliefs directly. But, as I also argued in the preceding chapter, Hume is far from alone in holding to this kind of logical purism. To mention again just one example, Karl Popper, as an avowed disciple of Hume, and the Popperians, as avowed disciples of Popper, are the heirs to this position, for on their view only deductively certain conclusions warrant belief: induction, and by extension, abduction, do not provide rational ground for accepting a conclusion. In fact on the Popperian view scientific methodology is cut down to just one deductive rule: modus tollens. It is this lone inference rule that underwrites falsification, and falsification is made to do all of the work on the Popperian scheme.

If Deductivism has been the overwhelmingly dominant tradition in the history of philosophy then, as suggested in the previous chapter, its usual context has been Ockhamist Empiricism. But one could easily be a Deductivist and not be an Empiricist—indeed Descartes in the Meditations is a particularly clear example of this. But we may conjecture that Rationalism did not have a sufficiently attractive epistemology to recommend it to subsequent generations and to a great extent it represents the ‘road not taken’ in modern philosophy. For our purposes here, however, the main point is that Deductivism is a broader doctrine than Ockhamist Empiricism. Sometimes in this book my concern will be with the wider doctrine and at others it will be with the narrower. Here it will be with the wider.

One of the main features of deductive reasoning that has caused it to be accorded a special role is its formal character. Inferences can be seen to be valid on the basis of their formal patterns and these formal patterns can be studied independently of the inferences themselves. The formality of deductive reasoning is advantageous if one wishes one’s set of maxims for rational assessment to consist in a set of rules that can be stated independently of subject matter. This has been seen as important from the time of Aristotle, on through the Medieval period, and exercises a powerful hold on Twentieth Century philosophy. Undoubtedly, throughout the history of philosophy, it was the certainty of deductive inference that led it to have the unique role that it had, but in the Twentieth Century, where fallibilism has largely replaced certainty-based epistemologies, it is the fact that deductive inference can be formalised that has kept it in central position. The suspicion that many philosophers feel for inductive and abductive inference probably reflects the difficulty (or impossibility!) that is thought to attend the formalisation of those inferences. If they cannot be formalised how can we be sure that they are ‘sound’, or reliable?

Bas van Fraassen is an empiricist who stands as the most recent and articulate representative of this ‘Formalist’ tradition. But like many modern Formalists van Fraassen is a probabilist; that is, he thinks that rational belief change takes place by revisions of one’s subjective probabilities.1 He differs from traditional formalists however in his attitude to the rules that one should adopt when changing one’s subjective probabilities. Whereas a Bayesian will have a single rule, conditionalisation, which is applied unflinchingly to all new evidence, van Fraassen has only a single negative rule: avoid probabilistic incoherence. Anything that this rule does not exclude is permitted. One might think that such an apparently liberal rule would mean that van Fraassen is in reality an anti-formalist. However, while it is true that this rule permits opportunistic belief revisions—‘I believe p because I can!’—much of the laissez-faire attitude is undercut by van Fraassen’s opposition to abductive inference.2 And it is this opposition that is at issue here.

So van Fraassen is opposed to abductive inference. Or, more precisely, he is opposed to the idea that there is a rule of rationality that requires one to believe the best explanation. Or he is opposed to such a rule when it is interpreted probabilistically? Or ampliatively? (It is a little unclear exactly how strong the conclusion is intended to be.) But a defender of abductive reasoning could hold the line provided concessions are made elsewhere. Perhaps a strict Bayesian who uses simple conditionalisation could not accept abductive inference on pain of incoherence in his probability revisions (though it is not clear to me that this need be so) but someone who holds to the more liberal Jeffrey Conditionalisation—may abduction not be compatible there? Imperfect rational agents, such as we are, may yet be able to find room for inference to the best explanation within the motley of rational rules of thumb and imprecise probability assignments. Surely we know too

1 See Bas C. van Fraassen, Laws & Symmetry, (Oxford: Oxford University Press), 1989, chapters 6 and 7.

2 It is hard to avoid Formalism once one admits that beliefs are represented by subjective probabilities and that revisions should avoid incoherence. Incoherence is a result of accepting conflicting rules for belief revisions.


little about fallible rationality to rule it out.3 Yet I mention these matters only to promptly drop them. My intention is not to give a detailed account of the rôle that abductive inference plays in the instrumental rationality of fallible agents but simply to argue that it must play some role. Thus even though I have mentioned van Fraassen as a representative of the kind of abductive scepticism that I wish to argue against I am not going to defend abduction against his specific charges (though the comments above should indicate how I hope such a defence might go). Instead I will present an argument that suggests that inference to the best explanation is an integral part of reasoning, and indeed so entwined with our use of deductive reasoning that the latter would be paralysed without it. Hence in order to defend abduction I will focus not on its usefulness in scientific contexts—where Deductivists are all too happy to insist on impossible economies in methodology—but in those contexts where Deductivists find themselves most comfortable. I will endeavor to place abductive inference at the heart of the Deductivists’ own methodology—where they live, as it were. To do this I isolate two elementary claims and argue that they are incompatible. Here are the two claims:

AI: Abductive inference is irrational. It is not reasonable to believe that the best explanation is the one most likely to be true, nor therefore is there any rule, no matter how vaguely specified, that licenses such inferences.

DR: Deductive refutation is rational. It is reasonable to try to show a posi- tion to be unsound by a determination of the invalidity of the arguments used. We are not warranted in believing the conclusions of invalid infer- ences; thus we are required to revise our beliefs on the determination of invalidity.

Of course DR does not say that the only method of refuting an argument is to convict it of invalidity; it is perfectly proper also to show that the argument rests on a false premise. However DR could hardly be denied. The method of reasoning, including reasoning within philosophy itself, consists in large measure of the checking of inferences to ensure that they are valid and, if they are not valid, rejecting them. Even those idealised rational agents who conditionalise are presumed to update their beliefs according to logic. An invalid

3 The view that I’m gesturing toward is proffered by David Lewis in his “Probabilities of Conditionals and Conditional Probabilities II” (Conditionals, (ed) F. Jackson, Oxford: Oxford University Press, (1991), p. 107). I have more to say about credence measures for non-ideal rational agents in chapter 6.


inference, were it to occur, is presumed to be rationally correctable. Even an ideal agent must be human enough to admit when he has made a mistake. The determination of an argument’s invalidity is made easy in the limiting case when the premises are known to be true and the conclusion false. However in the usual dialectical circumstance the truth or falsity of the conclusion is precisely what is in dispute: the limiting case is not, therefore, of much practical use. That is why we need to have recourse to some other method of refuting an argument and it is this that DR sanctions. Obviously, DR gives expression only to the negative side of Deductivism, the side which concentrates on refutation and the determination of invalidity. But one could be a Deductivist in a positive sense if one only ever asserted that which followed, and which could be shown to follow, from premises that one already held. This cleaving to the good rather than spurning the evil is rarely ever met with in actual Deductivists, however. From Aristotle to the present day, Deductivists have had a critical programme in which the views of others are to be shown up as resting upon fallacious arguments. This critical emphasis is, one could conjecture, principally responsible for the way in which sceptical conclusions are the usual terminus for philosophical investigations from Hume onwards. Exposing how little reason can establish while wielding it with devastating effect against an opposing position is the great tension, one might almost say the contradiction, that has lain at the heart of much modern philosophy. DR gives expression to this negative side of Deductivism for it is its negative pretensions that I wish to expose. I will argue that AI and DR are in direct conflict with one another.
Thus although many may have been tempted by the logical purism described above there simply is no coherent position to occupy: one cannot consistently use deductive reasoning in the way that argument requires and also forswear abductive inference. Deductivism ends not in scepticism but in inconsistency.

2.2 Invalidity

It is necessary now to rehearse an argument concerning the problem of determining the invalidity of arguments. The argument is known to many philosophers, in the abstract at least, though it is often neglected in practice. The neglect is rather lamentable given the significance of the issues for the nature of the philosophical enterprise. Suppose we consider the following argument:

(1)

If it is sunny in California then it is sunny in Los Angeles
It is sunny in Los Angeles
∴ It is sunny in California

We would normally adjudge this argument to be invalid and would point to the fact that it is a substitution instance of the invalid argument form asserting the consequent.

(2)

p → q
q
∴ p

Furthermore, ordinary truth tables provide a decision procedure for proving the invalidity of this argument form. The problem arises because the inference that underlies the reasoning just given is itself invalid. The argument in (1) is not invalid because it is a substitution instance of the invalid form (2). That could not be the explanation because the invalid form (2) has valid substitution instances and therefore being a substitution instance of (2) does not guarantee that an argument will be invalid. For example the argument in (3) is a valid substitution instance of the invalid argument form (2).

(3)

If it is raining in California then it is raining in California
It is raining in California
∴ It is raining in California.

The correct explanation for the invalidity of the argument (1) is, then, not that it is a substitution instance of the argument form (2) but that there is no valid argument form of which it is a substitution instance. We can see this by comparing (1) to (3). (3) is not only a substitution instance of the invalid form (2); it is also a substitution instance of the valid argument form (4).

(4)

p → p
p
∴ p

The point is that whereas some invalid argument forms have valid substitution instances, all valid argument forms have only valid substitution instances. It follows, then, that if an argument is a substitution instance of some valid argument form it must be valid. The correct explanation for the invalidity of (1) would then seem to be that it is a substitution instance of (2) and not a substitution instance of any valid argument form.4

Nor is this phenomenon restricted to purely trivial examples, for the argument (5) is valid

(5)

If either Gwyneth or Bill is here then they are both here
Gwyneth and Bill are both here
∴ Either Gwyneth or Bill is here.

This is despite the fact that this argument is plainly an instance of asserting the consequent—that is, it is a substitution instance of the invalid form (2). But no matter, because it is also a substitution instance of the valid argument form

(6)

(p ∨ q) → (p & q)
p & q
∴ p ∨ q
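Since propositional validity is decidable by truth tables, the claims made here about forms (2), (4), and (6) can be checked mechanically. The following sketch (the helper names `implies` and `valid` are mine, not the author's) brute-forces every assignment of truth values, searching for a countermodel, that is, a row on which all the premises are true and the conclusion false:

```python
from itertools import product

def implies(a, b):
    # Material conditional: a -> b is false only when a is true and b is false.
    return (not a) or b

def valid(premises, conclusion, n_vars):
    """A form is valid iff no assignment of truth values makes every
    premise true while the conclusion is false (i.e. no countermodel)."""
    for row in product([True, False], repeat=n_vars):
        if all(prem(*row) for prem in premises) and not conclusion(*row):
            return False  # countermodel found: the form is invalid
    return True

# Form (2): p -> q, q, therefore p (asserting the consequent)
form2 = valid([lambda p, q: implies(p, q), lambda p, q: q],
              lambda p, q: p, 2)

# Form (4): p -> p, p, therefore p
form4 = valid([lambda p: implies(p, p), lambda p: p],
              lambda p: p, 1)

# Form (6): (p v q) -> (p & q), p & q, therefore p v q
form6 = valid([lambda p, q: implies(p or q, p and q), lambda p, q: p and q],
              lambda p, q: p or q, 2)

print(form2, form4, form6)  # prints: False True True
```

Form (2) fails on the row where p is false and q is true; forms (4) and (6) admit no countermodel, which is why arguments (3) and (5) are valid despite also being substitution instances of the invalid form (2).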

Obviously it is possible to construct examples of arbitrarily large complexity—large enough so that we can no longer see at a glance whether the argument instantiates some valid argument form. The crucial point, however, is that it is quite wrong to infer that an argument is invalid just because it is a substitution instance of some invalid argument form. Thus we cannot, in general, hope to show that an argument is invalid by citing some invalid argument form that it instantiates. We need to show more, namely that there is no valid argument form that it instantiates.5 We may now seem to have found a prescription for determining an argument’s invalidity. Unfortunately, an objection can be put even to this most

4 The significance of this point was first brought home to me by the paper ‘The Fallacy behind Fallacies’ by Gerald Massey in Midwest Studies in Philosophy, 1981, pp. 489–500. The issue was also noted, apparently independently, by David Stove in his book The Rationality of Induction, (Oxford: Oxford University Press), 1986.

5 For the propositional arguments that we are employing here we can make use of the following definition of substitution instance. Let p1, p2, ..., pn |= q be an argument form of the propositional calculus with p1, p2, ..., pn and q as propositional variables. If A1, A2, ..., An, B are propositions and A1, A2, ..., An |= B is an argument then it is a substitution instance of p1, p2, ..., pn |= q if there is a function f that maps p1, p2, ..., pn and q to the propositions A1, A2, ..., An, B. If f is a one-to-one function from propositional variables to simple propositions then the form is said to be the specific form of the argument. The argument that we have given above can now be stated thus: if p1, p2, ..., pn |= q is a valid argument form then every substitution instance will be a valid argument. However, if p1, p2, ..., pn |= q is an invalid argument form it does not necessarily follow that a substitution instance will be invalid.

24

The Art of Necessity

modest proposal—indeed the accusation is that the above argument to the pre- scription for determining invalidity is itself invalid. The objection runs as fol- lows. It does not follow from the fact that the validity of (3) is to be explained by its being a substitution instance of the valid argument form (4) that every valid argument must be a substitution instance of some valid argument form— all valid argument forms may indeed have only valid substitution instances and yet not all valid arguments be substitution instances of valid argument forms. In other words, it does not follow that all validity must be formal in nature. We now accept that inductive inferences are not formal—this was the enduring lesson of Goodman’s non-projectible grue. Why not accept that not all deductive validity need be formal? David Stove has argued that we should indeed accept this possibility. Let me call deductive arguments whose validity is not formal in nature, if there are any, anomalous valid arguments. The possibility of there being anomalous valid arguments is relevant to the determination of the invalidity of an argument for the following reason: in order to establish that an argument

is invalid it is required (a) that one find an invalid argument form of which it is

a substitution instance, (b) that one determine that there be no valid argument

form of which it is a substitution instance, and then (c) that one determine that

it not be an anomalous valid argument.
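The definition of substitution instance given in footnote 5 can be mirrored by a toy matcher. Here propositions are plain strings, compound forms are nested tuples over variable names, and every name and data representation is mine, for illustration only:

```python
# Toy rendering of footnote 5's definition: an argument is a substitution
# instance of a form iff some map f from the form's variables to
# propositions turns each schematic premise into the corresponding
# premise and the schematic conclusion into the conclusion.
def apply_map(form, f):
    if isinstance(form, str):              # a propositional variable
        return f[form]
    op, *parts = form                      # e.g. ("->", "p", "q")
    return (op, *(apply_map(x, f) for x in parts))

def is_substitution_instance(argument, form, f):
    (form_prems, form_conc), (prems, conc) = form, argument
    return ([apply_map(x, f) for x in form_prems] == prems
            and apply_map(form_conc, f) == conc)

G, B = "Gwyneth is here", "Bill is here"
arg5 = ([("->", ("v", G, B), ("&", G, B)), ("&", G, B)], ("v", G, B))

# (5) instantiates the invalid form (2), p -> q, q |- p ...
form2, f2 = ([("->", "p", "q"), "q"], "p"), {"p": ("v", G, B), "q": ("&", G, B)}
print(is_substitution_instance(arg5, form2, f2))                 # prints True

# ... and also the valid form (6), (p v q) -> (p & q), p & q |- p v q.
form6 = ([("->", ("v", "p", "q"), ("&", "p", "q")), ("&", "p", "q")],
         ("v", "p", "q"))
print(is_substitution_instance(arg5, form6, {"p": G, "q": B}))   # prints True
```

The same argument (5) thus instantiates both the invalid form (2) and the valid form (6), under different substitution maps.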

The possibility of there being anomalous valid arguments clearly complicates an already complicated picture. Do we have an argument for there being no anomalous valid arguments, or, to put it the other way round, do we have a reason for believing that deductive validity is formal? I think we do—though I can give only a very cursory sketch of it here.

A valid argument is one for which it is impossible that the premises be true and the conclusion false. This can occur in three ways: (1) the premises are necessarily false, (2) the conclusion is necessarily true, or (3) the conclusion is somehow contained within the premises. It is the last way that is seized upon by Relevantists as at the heart of validity. Whether they are right about that or not, it is clearly at the heart of the question as to whether validity is formal. This is because cases (1) and (2) are obviously formal: any impossibility will do for the premises in (1) just as any necessity will do in the conclusion for (2). Case (3) is more difficult to see, however. The notion of the conclusion being somehow embedded in the premises and being extracted by inferential rules is the reason for the impossibility of having the conclusion false when the premises are true: what is true in the premises could not become false just by being isolated by a rule and placed as the conclusion. But if this embedding is the underlying explanation of validity then it also suggests why deductive logic should be formal. If the conclusion is embedded in the premises in a certain way then, surely, some other conclusion could be similarly embedded in a different premise set? Since embedding is a formal notion so is deductive validity. (If we wanted to give this a name we might call it Bolzano’s Thesis.) I conclude then that there are no anomalous valid arguments. All valid arguments will fit into some (perhaps as yet undiscovered) formal system in which there are logical constants in addition to the logical variables.

Returning to the question as to how invalidity may be established, we find that ruling out anomalous valid arguments simplifies the account given above to the following two conditions. An argument is invalid iff (a) there is an invalid argument form that it instantiates and (b) there is no valid argument form that it instantiates. Invalid arguments always satisfy condition (a) because every invalid (or valid) argument is a substitution instance of the invalid argument form

(7)  p
     ∴ q

Determining that an argument is invalid is thus always a matter of determining that it not be a substitution instance of some further valid argument form. 6 In general, then, determining invalidity is not a matter of finding some (i.e. any) invalid argument form that it instantiates—as, for example, when we attempt to indict an argument as invalid because it asserts the consequent (as in (1) above)—it is a matter of showing that there is no valid argument form that it instantiates. 7

Since condition (a) is always satisfied we can simplify the above account to give the following criterion of invalidity. An argument A is invalid iff there is no valid argument form of which A is a substitution instance. We can summarise our claims thus far in two general points:

(i) Every argument, valid or invalid, is a substitution instance of some invalid argument form.

(ii) Every valid argument form has only valid substitution instances.

6 Strictly, of course, (7) is not quite the form for every argument, unless we agree to conjoin premises. But it is nevertheless true that there will be an invalid argument form for every argument, if we have a separate variable to mark each and every premise. It will thus be true that every argument, valid or invalid, is a substitution instance of some invalid argument form.
7 Lloyd Reinhardt tells a nice story about George Boolos from the time when they were logic students together at Oxford. Reinhardt was having trouble formalising a particular argument and asked Boolos if he could do it for him. Boolos: So let me get this straight, all you want me to do is put this in logical form? Reinhardt: That’s right, just put it in logical form. Boolos: O.K., p therefore q.


One may hastily conclude from these points that it is never possible to determine that an argument is invalid by adducing some invalid argument form of which it is a substitution instance. This is not so, however. Just as there are some argument forms, the valid ones, that have only valid substitution instances, so there are others that have only invalid substitution instances. We might call these the Hyper-Invalid Argument Forms. The simplest example of these is (8).

(8)  p ∨ ¬p
     ∴ p & ¬p
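A hyper-invalid form can be recognised mechanically: every assignment makes its premises true and its conclusion false, so the corresponding conditional is false on every row of the truth table. A sketch of that test (the Python encoding and the function name are mine, not the author's):

```python
from itertools import product

# A form is hyper-invalid when its corresponding conditional is false on
# every row: all premises true and the conclusion false under every
# assignment (tautologous premises, contradictory conclusion).
def hyper_invalid(premises, conclusion, nvars):
    return all(
        all(prem(*row) for prem in premises) and not conclusion(*row)
        for row in product([True, False], repeat=nvars)
    )

# Form (8): p v ~p |- p & ~p.
print(hyper_invalid([lambda p: p or not p],
                    lambda p: p and not p, 1))        # prints True

# Asserting the consequent, (2): merely invalid, not hyper-invalid.
print(hyper_invalid([lambda p, q: (not p) or q, lambda p, q: q],
                    lambda p, q: p, 2))               # prints False
```

By contrast, ordinary invalid forms such as (2) fail this test, which is why finding some invalid form that an argument instantiates proves nothing by itself.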

In general, these have not been much studied by logicians and they are pointed up in no logic text of which I am aware. 8 In the propositional case these will be represented (or rather their corresponding conditional forms will be represented) by truth tables that have ‘false’ in every line of the final entry column. 9 This will be the case when the conjunction of the premises has the form of a tautology and the conclusion has the form of a contradiction. If an argument is a substitution instance of a hyper-invalid argument form then it must be invalid, and could not therefore be a substitution instance of any valid argument form. One way, then, to determine that an argument is invalid is to try to find a hyper-invalid argument form that it instantiates; this will be one way of testing the argument. But if one can find no such form—and it is clear that the hyper-invalid argument forms will be only a very small subclass of the total set of invalid argument forms—then one is thrown back onto finding some other way to eliminate the possibility that the argument instantiates some valid argument form. In general, then, demonstrating invalidity is a matter of ruling out validity.

8 Neither Massey nor Stove note the existence of such forms, a fact that invalidates their general conclusions, or so I argue in section four. (The point was also not made in the earlier version of this chapter that appeared in Theoria.)
9 Thus argument forms—or better their corresponding conditionals—have a symmetrical structure: at one end there are those—the valid ones—that have only true as the final value, and at the other end those that have only false. The rest are simply the common or garden variety invalid forms; it is these that we are mainly concerned with.

Since determining invalidity is a matter of ruling out validity it is important to emphasise that this is a highly non-trivial matter. An argument may have a number of different forms in, say, the propositional calculus, as does argument (3) above, or it may have forms in a number of different systems—the propositional calculus, the modal calculus S5, the second-order quantificational calculus, or what have you. These represent two dimensions in which the forms of an argument can be distributed, the depth and breadth of the structure, as it were. We have not determined that an argument is invalid until we have ruled out all of the valid forms in all of the systems. And further, since we have not yet exhausted a construction of all of the logical systems, we do not know whether we might have missed something of importance to the question of an argument’s validity once we have been through the valid argument forms in all of the known systems.

The two dimensions of logical structure carry with them their own separate difficulties for the determination of invalidity. The difficulty of determining whether an argument is not valid within a particular logical system we may call the synchronic problem; it consists of the observation that invalid argument forms can have valid substitution instances—finding an invalid argument form is, in general therefore, necessary but not sufficient for proving invalidity. If an argument is not valid in some particular system—if we can find no valid argument form that the argument instantiates in the system—then we must look through all of the other logical systems. This we may call the diachronic problem. An argument is invalid tout court if and only if there is no valid argument form of which it is a substitution instance (that is, no valid argument form in any logical system).

Logicians sometimes try to finesse these problems by speaking of arguments as, say, ‘not valid in the propositional calculus’, or even as ‘invalid in the propositional calculus’, and simply avoiding the question of whether an argument is invalid simpliciter. Sometimes it is one of these that is intended even though it is not said outright, for what is said is either false or misleading. Kleene, for example, says, ‘to show by truth tables that a formula is not valid, the table must in general be entered from the prime components’. 10 It is obvious however that truth tables would show at best that a formula is not valid in the propositional calculus—the question of whether it is not valid simpliciter has not even been addressed. (Unless we take the term ‘formula’ as temporarily restricted to the formulas of the propositional calculus. But then it is clear that we are not dealing with arguments but symbolic expressions.)

10 S. Kleene, Mathematical Logic, London, Wiley (1967), p. 14. The same mistake is made by Copi in Symbolic Logic (5th ed.), Macmillan, N.Y. (1979), pp. 19–25. It should also be said that entering from the prime components—or what Copi calls simple statements—will show at best that an argument is not valid in the system in which the prime components are prime. It will not show that the argument is invalid. The argument All men are mortal, Socrates is a man, therefore Socrates is mortal would be judged invalid if one entered from the prime components of the propositional calculus.

But the stronger locutions, such as ‘invalid in the propositional calculus’, are more deeply misleading still, even when they are explicit, for they hark back to the false idea that invalidity can always be determined within a particular system, as though there were invariable signs of its presence. But invalidity is not like a blemish on a carpet and logic is not a well-crafted stain detector. At best we can say that an argument is not valid in the propositional calculus; we should not mis-state that as invalid in the propositional calculus. 11 Arguments do not waver in invalidity, being invalid in the propositional calculus but valid in S5. An argument is either valid or it isn’t.

To say that invalidity is not like a stain is to reiterate a basic point about logic. Logic is, properly speaking, a theory of validity—it is not a theory of invalidity and it is a mistake to think of it as one. Logicians are attempting to codify the valid inferences—every inference for which it is impossible for the premises to be true while the conclusion is false—but it is no part of their task to also produce a set of fallacious inference forms, just as it is no part of the job of mathematicians to codify what is not mathematically true. The invalid arguments are simply the disorganised remainder, once we have taken away all the valid ones. But this means that the wisdom of the philosophical Enlightenment, with us to this day and enshrined in philosophical practice—that proof is hard while disproof is easy—is exactly the opposite to the lesson of logic, which is that proof is (relatively) easy, because it is rule-governed, while disproof—at least to the extent that it involves detecting invalid inferences—is hard because it isn’t. The œcumenical compromise is to say that proof and disproof are both as hard as one another, though for different reasons. Proof is hard because, even though the inferential rules are daily being discovered and systematized, it is difficult to find premises that are more assertable than the conclusion; disproof is hard because determining that an opponent’s argument is invalid is no easy matter. 12

11 The distinction here is akin to the theological distinction between evil as an absence of perfection and evil as the presence of something bad. If it’s the former, evil-detectors will have to be thoroughly acquainted with perfection; if it’s the latter they need only a nodding acquaintance with the bad.
12 John Bacon and Jim Franklin have suggested that the real problem is that we should only call an argument valid when the proponent of the argument names the rule of inference that is intended to generate the conclusion and the stated rule does indeed do so. I see no good reason thus to change our current terminology, but let us consider the proposal, calling an argument licensed when a rule is correctly cited. Now one can indeed usefully indict an argument (ad hominem) as improperly licensed when the argument is valid but for the wrong reasons—but we will still be left with the question as to whether a given argument is valid or not. This is, after all, our prime interest. (And we must keep in mind how small our knowledge of inferential rules actually is!)

To return then to the question of how invalidity is to be determined: the synchronic and diachronic problems represent two distinct obstacles that must be overcome if we are to conclude that an argument is invalid. How difficult are these obstacles to overcome?

The existence of a decision procedure for testing the validity of arguments in a particular logical system is often misrepresented as though it were a procedure for determining invalidity tout court. Indeed it is this that is suggested by the misleading locution ‘invalid in the propositional calculus’. Generations of undergraduates have gone away from elementary logic courses with the idea that they have learned how to determine an argument’s invalidity. But they have not. Rather they have learned something with a more limited utility: they have learned how to test the validity or invalidity of an argument form, a form that their given argument may have. If the form is valid then so is the argument, but if not then the argument may yet be valid or invalid—that is something the test was unable to discern. It is a symptom of the depth of this confusion that students are never asked to determine an argument’s invalidity by testing for the presence of a hyper-invalid argument form—which is at least a sufficient, if not a necessary, condition. The idea that invalidity is determined the other way has been so seductive that the presence of a genuine test, however partial, has been completely overlooked.

We can bring out the problem with determining the invalidity of an argument by imagining the procedures of a hypothetical Invalidity Machine that is designed to answer the question ‘Is ______ invalid?’ for any argument that is inserted into the blank. Let us suppose that the machine can access all the known logical systems. It is instructed to search through the hyper-invalid argument forms and, if it finds a form that the argument has, then it issues the answer ‘yes’ and the program terminates. If the answer is ‘no’ then it searches through the valid argument forms and, if it finds one, then the answer is ‘no’ and the program again terminates. But if it instantiates no such forms then the machine simply has to go to the next system and repeat the procedure. When it reaches a system that has no decision procedure then it is forced to go through proof trees and it can never be certain that it has exhausted all the possibilities. Eventually then, and as we see it is sooner rather than later, the machine is doomed to go through an endless search. 13 If the argument was valid then there is a chance that the machine will terminate, but if the argument is invalid the machine will never be able to say that it is: it will never terminate.

13 Of course we know that we do not have to go far before we reach that point, for although the first-order predicate calculus is complete, we know from Church’s Theorem that there is no decision procedure for it. When we reach more exotic systems we know, from Gödel’s Theorem, that the system may be incomplete.

We can summarize the conclusions of this section thus: there is no effective procedure for determining the invalidity of an argument (as opposed to an argument form in some particular system). Or better: the determination of an argument’s invalidity is not algorithmic. In the next section we consider the implications of this for philosophy and philosophical methodology.
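As a coda, the Invalidity Machine's routine can be caricatured in a few lines. The `System` record and its two search predicates are wholly hypothetical stand-ins of my own devising; for undecidable systems no such total predicates exist, which is just the point:

```python
from typing import Callable, Iterable, NamedTuple, Optional

# Toy sketch of the hypothetical Invalidity Machine. Each "system" is a
# stand-in object with two search predicates; real systems beyond the
# decidable ones supply no such total predicates (Church's Theorem),
# so the loop over systems need never return an answer.
class System(NamedTuple):
    has_hyper_invalid_form: Callable[[str], bool]
    has_valid_form: Callable[[str], bool]

def invalidity_machine(argument: str, systems: Iterable[System]) -> Optional[str]:
    for system in systems:
        if system.has_hyper_invalid_form(argument):
            return "yes"   # instantiates a hyper-invalid form: invalid
        if system.has_valid_form(argument):
            return "no"    # instantiates a valid form: valid, so not invalid
    return None            # supplied systems exhausted: no verdict reached

# A toy "propositional calculus" that recognises just two patterns:
toy = System(lambda a: a == "p or not p, therefore p and not p",
             lambda a: a == "p, therefore p")
print(invalidity_machine("p, therefore p", [toy]))      # prints no
print(invalidity_machine("Cogito, ergo sum", [toy]))    # prints None
```

On the third argument the toy machine, like the text's machine, simply runs out of systems without a verdict; with infinitely many systems it would never halt.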

2.3 IMPLICATIONS

The validity of an inference has been regarded as a matter of form at least since the time of Aristotle; accompanying this thesis there has been the shadowy corollary that invalidity is also a matter of form. Medieval logicians, such as Peter of Spain and Robert Kilwardby in the late 13th Century, tried to bring out this corollary by developing a science of fallacy, taking their lead from some fragmentary remarks in Aristotle’s De Sophisticis Elenchis. In Kilwardby we find the formality of invalidity becoming quite explicit.

But because reasoning, both demonstrative and dialectical, is the source of recognising and discovering truth, and a careless person can be deceived in connection with either, logic must determine the deceptions that can occur in either of them so that they can be avoided and one may thus come to the truth more expeditiously. 14 (My emphasis)

What is attractive in this picture is clear enough. If Robert Kilwardby had been right and it were possible to list the forms of deception, as well as the forms of correct reasoning, then, it is supposed, deductive logic would have been sufficient for all reasoning. It does not follow, of course—since it does not follow from the existence of a set of deductive forms that that set must be exhaustive of good reasoning—that is, it does not preclude the need for non-deductive probabilistic inference—even if we allow forms of invalid reasoning into the set. But what is important is that it was thought to follow: deductive logic, indeed Aristotelian syllogistic, was thought to give a necessary and sufficient set of rules for correct reasoning. This form of Deductivism seems to have had its epistemological image in the conflation of knowledge with certainty, a conflation that had far-reaching, and well-advertised, consequences.

14 Robert Kilwardby, “The Nature of Logic: Dialectic and Demonstrative”, in The Cambridge Translations of Medieval Philosophical Texts, Vol.I, (ed) N. Kretzman and Eleonore Stump, Cambridge, Cambridge University Press, 1988, p. 271. Kilwardby became the Archbishop of Canterbury in 1272. Kilwardby’s view that the deceptions of reason could be determined had political consequences when he banned the teaching of certain propositions at Oxford. In all of this he seems to have been following the similar actions of Peter of Spain, when in 1267 he also had banned certain propositions in Paris. The 1277 Paris Condemnations are discussed in chapter 6.

Abductive Inference and Invalidity

31

Nevertheless it is the negative side of this Deductivism, the debunking role of reason, the detection of fallacies, that is the point at issue here. The issue is whether there is a simple way of recognising invalidity. As we saw in the previous discussion it is quite wrong to conclude that an argument is invalid merely because we can find an invalid argument form that it instantiates. It follows, then, that when we infer that an argument is invalid we must be making a judgement as to the non-existence of any valid argument forms for the argument in question. What basis there might be for this judgement is never mentioned, however, and were Deductivism true it is not clear that there could be one. To the extent, then, that philosophical analysis does not have a grasp of what would be required to make an inference to invalidity it is radically enthymematic at best. At worst it is simply incoherent.

The pervasiveness of this error can be seen in the almost universal acceptance of the argument from analogy, for this argument is identical to the mistaken inference that arguments are invalid in virtue of their instantiating some invalid argument form.

In an argument by analogy an argument is inferred to be invalid if a second argument can be produced which is both obviously invalid—because it has, let us say, true premises and a false conclusion—and that has the same form as the first argument. The implication is that, if two arguments have the same form and one is invalid, then the other must be also. This is not so, however, as we’ve already seen. Sameness of form does not guarantee that two arguments will both be invalid if one is, because one argument may instantiate some further valid argument form that the second does not. The examples (3) and (1) are, in fact, of just this type. Sameness of form cannot guarantee invalidity because invalidity is not in general a matter of instantiating some particular invalid form. The argument from analogy thus rests on the same mistaken idea as before: that invalidity is a matter of form and that we can show an argument to be invalid if we can just produce some invalid form of which the argument is a substitution instance.

One might, if one were making a judgement in haste, think that the argument from analogy is valid when one is inferring the validity of one argument on the basis of its sameness of form to a second argument. After all validity, unlike invalidity, is a matter of instantiating a particular form: surely if one argument is valid then any argument that has the same form will be valid also. It takes only a moment’s reflection, however, to see that this is not so. If the argument from analogy will not work in the case of an inference to invalidity then it cannot work in the case of an inference to validity either, and for exactly the same reason. Valid arguments can instantiate invalid argument forms and an invalid argument may share that form. It does not follow, then, that two arguments must both be invalid or valid together if they have a common form. Indeed our original examples provide a case: (3) is valid but it has the same form, namely (2), as (1). It is true that valid argument forms have only valid substitution instances but one cannot mistake that for the claim that valid arguments have only valid argument forms. But to say this is, in fact, to do nothing more than to repeat that instantiating some invalid argument form does not guarantee that an argument is invalid. It doesn’t guarantee it because invalid argument forms can have valid substitution instances. 15

15 Copi, in his Symbolic Logic (5th ed.), suggests that the argument from analogy is valid provided the specific forms of the arguments are considered (i.e. given by the prime components). This is not so, however. Specific form is system-relative (in Copi’s usage it is relative to propositional logic) and cannot tell one that an argument is invalid simpliciter. This is the same mistake as was mentioned above: treating not valid in the propositional calculus and invalid as if they were one and the same thing. Copi knows that they are not the same thing but only by conflating them can he justify the fallacious inference.

The pervasiveness of the fallacious Argument from Analogy suggests, I think, that philosophers are in the grip of a quite misleading view of the capacity of logical form to reveal an argument’s invalidity. The philosopher’s view of logic thus differs markedly from the logician’s, for, as I noted in the last section, the latter sees logic as a calculus of valid inference, whereas the former sees it as an instrument in a fundamentally dialectical process of argument and counterargument. Logicians have abetted this situation by suggesting—usually in the exercise sections of their text-books—that their decision procedures for validity and invalidity can be brought to bear on natural language arguments—the very stuff of the philosophical enterprise—and not merely on symbolic argument forms. They have thus encouraged the view that an argument that has been translated into the logical notation of a system and not been found to be valid in that system can be said to be invalid in that system simpliciter—as though they were temporarily unaware of the importance of the placement of the negation in that claim. They have thus encouraged the near-universal acceptance of what might be called Kilwardby’s Error (what I have also called Deductivism): that logic can determine the errors of human reason. It is this that has led to the idea that the proper rôle for logic is in the debunking of the arguments of others.

But if the Argument from Analogy shows the ubiquity in philosophical method of the erroneous inference from invalid form to invalid argument, then there are other, deeper, reasons to believe that philosophers chronically underestimate the difficulty of knowing when an argument is invalid. Indeed, what I called in the last section the diachronic problem can thus seem to present us with abysses that are deeper and wider than an exclusive focus on the synchronic problem might suggest. Indeed, what we have said so far does not really suggest how serious the problem is.

There are a multitude of logical systems each trying to capture some inferential pattern in the complexity of natural reasoning. To make an assessment of an argument’s validity or invalidity that argument must be assessed in each of the logical systems. If the argument instantiates a valid argument form then the argument is valid; if we can find no valid argument form then we can conclude that the argument has not been shown to be valid. It has not thereby been proven to be invalid. But surely, one might protest, the process of going through the logical systems need not always be a tedious matter of checking the forms of every such system? Indeed it need not. The process is considerably shortened by our ability to recognise that certain logical systems may not be relevant. If an argument contains no modal operators, for example, then we know that the inference, valid or invalid, is not a matter of modal logic. (We must recognise, however, that it may not always be clear what terms do function as modal operators.) The formalisation of logical inference allows us to see how the various terms in a natural argument could bear on its validity.

But to concede this is not to concede very much. We cannot resolve all disputes as to the validity or invalidity of an argument by simply sorting through the known logical systems. This is because we do not know everything that is relevant to validity. There are many examples of arguments whose validity has been in dispute, either because it was not clear how to render them in the formal system of the day or because no known formal system would accommodate them. Descartes’ Cogito Ergo Sum is a familiar example.
This argument has been taken to be circular and invalid—something it could not possibly be, since a circular argument is ipso facto valid—as well as straightforwardly invalid, and straightforwardly valid (though not at the same time). It has also been taken to be valid but requiring a new logical system to capture the essence of its inference structure. Hintikka has proposed a performative logic to try to capture what many have found elusive in the argument. The details and the prospects of this proposal need not concern us here, however. What is important is that there remain inferences whose symbolisation is unclear. It follows that, as long as there are unclear cases, we will be unsure whether an inference whose invalidity is in dispute is not simply one of those cases. An inference to invalidity always carries such a risk, and it is a risk that should not be minimised.

The diachronic problem is, then, not as easy to finesse as might first appear. The development of logic is certainly not yet completed—indeed it may seem to have only just begun—and an inference to invalidity is inherently fallible. 16 The fallibility of such inferences is something that we will return to in the final section in more detail, but it is worth pointing out immediately that this fallibility becomes positively vertiginous when one takes into consideration Kripke’s argument for a posteriori necessities.

To my knowledge it has not been previously recognised—even by Kripke himself—that the existence of a posteriori necessities, such as that water is identical to H₂O, entails the existence of valid arguments whose validity is only discovered a posteriori. For example, the argument ‘A, therefore water = H₂O’ is a valid argument for any proposition A whatever, since the conclusion is a necessary truth. We only discovered that this argument form was valid, however, when we discovered the necessary truth that water is H₂O. We thus have a whole new category of valid arguments, namely those that are a posteriori valid. As science advances we can expect to discover more such necessities and therefore more a posteriori valid arguments. Logic has taken an empirical turn in a wholly unexpected way. Arguments that were thought to be invalid, prior to Kripke, such as ‘Bush is the President of the U.S., therefore water = H₂O’, turn out to be valid after all. Relevance logicians will undoubtedly already have turned pale at this suggestion—or perhaps they will simply take it as more grist to their mill—nevertheless, the point remains that we cannot know which arguments are valid until we at least know which propositions are necessary. This empirical turn makes it clear that the diachronic problem is as deep as the problem of knowledge itself. We cannot know with certainty that an argument is invalid until we have ruled out the possibility of its being valid. Our beliefs about which arguments are valid will change as our knowledge of the world changes.
It is not merely the unfinished state of our knowledge of logical form that is at issue but the unfinished state of our knowledge tout court. We have at best fallible knowledge of the class of invalid arguments, and some judgements of invalidity will shift as our non-logical knowledge shifts. In the next section I will take up the broader issues concerning the impact of Deductivism, and then turn to the question of whether the problem that has been posed for determining invalidity doesn't entail some serious inconsistency. I will then return to the question of the nature of the fallible inferences to invalidity in the final section of the chapter.

¹⁶ Witness the development of higher-order logics, topological logics, new versions of modal logic, temporal logics, dynamical logics, and many more. If one sees logic as the codification of all inferences that fit the standard definition of validity—and I think that is how it should be seen—then the number of potential logics is probably infinite. Thus even without a posteriori valid inferences there will be enough to be going on with.

Abductive Inference and Invalidity

35

2.4 INCONSISTENCY

The mistaken belief that invalidity can be easily recognised has created philosophical confusions and shaped philosophical practice. The self-image of philosophy as the art of debunking the views of others goes back at least as far as Socrates. When logic began to be formalised under Aristotle it became absorbed into this adversarial enterprise, with the consequence that it was put to use as a way of exposing the errors in an opponent's position. Provided one is cautious in making these negative assessments there is nothing inherently wrong with this. But when deductive inference becomes the whole of reasoning such a view becomes untenable. In the medieval period we have the beginnings of such an exclusive reliance, and it continued on through the Enlightenment, up to the present day, where deductive reasoning has often been taken to consist in nothing more than the lower predicate calculus.¹⁷

A direct effect of the psychological mind-set created by Deductivism is that sceptical arguments about non-deductive modes of reasoning, such as induction, were able to be stated and take hold in a way that would have been impossible if directed against deductive reasoning. Hume's famous argument, for example, about the alleged circularity of attempts to provide a foundation for inductive inference can be recast against deductive inference just as well.

To see this, note that we explain the validity of deductively valid arguments in terms of the impossibility of having the premises true and the conclusion false. But what is the basis of this impossibility? If the explanation is that the premises are necessarily false, then the problem becomes that of saying why it is impossible to have the necessarily false premises true and the conclusion false. If the explanation is that the conclusion is necessarily true, then the problem is that of saying why it is impossible to have the premises true and the necessarily true conclusion false. If, finally, the explanation is that the conclusion is somehow contained within the premises, then the problem becomes that of saying why it is impossible to have the premises true and that which is contained within the premises false. We might feel that the blatant inconsistencies here are our reason for believing that these impossibilities cannot obtain. But we cannot appeal to that inconsistency without the threat of circularity, without begging the question, without using logic in order to justify logic. Thus we must assume some principles of logic to provide a foundation for logic. For a sceptic like Hume, however, that would be no foundation at all. We can imagine him saying that he asks only for the foundation of this inference, that there is required a medium which may enable the mind to draw such an inference, and that if it proceed by means of reason—for which read 'logic'—then it must evidently be going in a circle, and taking for granted that which is the very point in question.

That we have not been saddled with a deductive scepticism to rival inductive scepticism is little more than an historical accident: Deductivism shaped the thinking of philosophers from so early in the piece that deductive scepticism was unable to get a toe-hold. Inductive scepticism, on the other hand, was not only thinkable, it was plausible—after all, if it wasn't deductive reasoning, and it couldn't be founded on deductive reasoning, then it must be groundless, or circular—which amounted to the same thing. Deductive logic, however, required no such grounding since it was self-evident. The threat of circularity fell on stony ground. A modern Humean, such as Karl Popper, could allow himself to take deductive inference for granted, while inductive inference was 'without foundation'.

But history is not monolithic, even if looking backward from the standpoint of its effects can make it seem so. During the 17th Century the Aristotelian logic of the Schoolmen came under attack from some Cartesians. This attack seems to have been directed less against logic itself than against its dominance in the schools. One Gabriel Wagner, however, went further: he argued that formal logic in its entirety was unnecessary, frequently wrong, and that it should be abandoned wholesale. Instead he advocated natural reason free from the distortions of formalisation and systematisation.

¹⁷ Deductivism in the medieval period included a number of modal principles of classical entailment, mentioned in the previous chapter: from a necessary truth only necessary truths follow; from an impossibility everything follows, etc. These were exceptionless generalisations and thus would have licensed the inference that it was invalid to, for example, infer a contingent proposition from a necessary one. Such meta-logical inferences can be found often in post-medieval empiricism and they play a central role in Humeanism. This is a matter that I return to in the final section.
In the next section I will briefly discuss one recent 'Wagnerian', D.C. Stove, and urge that Wagnerianism is the wrong line to take against Deductivism.¹⁸ Let me now return to my main theme, which is the nature of invalidity, since it is necessary to pursue the general insignificance of form from a different direction.

If an argument has true premises and a false conclusion then it is invalid. This is a sufficient condition for invalidity, but not a necessary one. An invalid argument can, therefore, sometimes be recognised from the actual truth values of premises and conclusion. This gives us an epistemic handle on invalidity in a small subset of the total set of cases. We have no such handle, however, in the case of validity. This is because an invalid argument can have any of the allowable distributions of truth-values: the conjunction of the premises can be false and the conclusion either true or false, or the conjunction of the premises can be true and the conclusion again either true or false. It follows that, since invalidity is compatible with all possible distributions of truth values—since, as it were, no actual distribution of truth values rules out invalidity—one cannot detect a valid argument from the presence of a special set of truth-value distributions over premises and conclusions.¹⁹ One cannot say: this argument must be valid because it has such-and-such a distribution of truth-values to premises and conclusion, for no invalid argument can have that distribution. Valid arguments cannot, then, be recognised by actual truth-values, whereas invalid arguments can, in certain circumstances. This is another way in which the asymmetry between validity and invalidity manifests itself.²⁰

Because actual distributions of truth-values do not afford us a means of recognising valid arguments in any instance, we are forced back on to a modal condition. This is the counterfactual definition of validity. It says: an argument is valid if and only if, were the premises true, the conclusion would have to be true also. We can derive from this a (negative) counterfactual definition of invalidity: it is not the case that were the premises true the conclusion would have to be true also. It is the distinctive feature of these definitions that they apply to arguments and not to argument forms, at least on the face of it. Do we then have a pair of definitions that obviate any need to speak of form at all? Not so: the modal intuitions for the counterfactual conditions entangle with logical form in awkward ways—ways that logicians rarely spell out.²¹

¹⁸ Gabriel Wagner published his attacks in 1696 in the weekly Vernunftübungen. These articles brought him to the attention of Leibniz, who replied by letter in the same year. Leibniz argues, with great show of moderation, that formal logic has its uses, that it is however frequently misused, and that the counterexamples that Wagner had proffered rely in the main on equivocation. For my purposes, however, the most striking feature of Leibniz's reply is his emphasis on the difficulty of determining invalidity. After making clear that this is something that requires much more skill than it is usually thought to involve, Leibniz stresses that logic is more useful for proof than exposing errors of reason: 'I lay little importance in refutation but much in exposition.' G. Leibniz: Philosophical Papers and Letters, 2nd ed., (ed.) L.E. Loemker, D. Reidel, Dordrecht (1969), pp. 462–471.

¹⁹ Note that my point here concerns actual truth values, not their modal status, so it is irrelevant whether an argument is valid because it has a contradiction in the premises or a tautology for its conclusion. These cases have no bearing on the asymmetry that I am concerned with here.
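The asymmetry between validity and invalidity described above can be checked mechanically for a simple propositional case. The sketch below is mine, not the text's: it encodes an argument form as Python functions and shows that an invalid form admits every combination of actual truth-values, so that only the true-premises-with-false-conclusion combination settles anything.

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """A form is valid iff no valuation makes every premise true and the conclusion false."""
    for vals in product([True, False], repeat=len(atoms)):
        env = dict(zip(atoms, vals))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# Affirming the consequent:  p -> q, q  |-  p  -- an invalid form.
premises = [lambda e: (not e['p']) or e['q'], lambda e: e['q']]
conclusion = lambda e: e['p']
print(valid(premises, conclusion, ['p', 'q']))  # False

# The invalid form nonetheless realises every combination of
# (all premises true?, conclusion true?) across its valuations:
combos = {
    (all(p(dict(zip(['p', 'q'], v))) for p in premises),
     conclusion(dict(zip(['p', 'q'], v))))
    for v in product([True, False], repeat=2)
}
print(len(combos))  # 4 -- all four combinations occur
```

Since the invalid form exhibits all four combinations, no actual distribution of truth-values can certify validity; by contrast a valid form such as modus ponens never yields the (True, False) row.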

²⁰ In an unaccountable slip E.J. Lemmon completely garbles the account of validity given in his book Beginning Logic, Van Nostrand Reinhold, Wokingham, 1965. He says, on p. 2, describing invalid arguments, using the terms sound and unsound where we use valid and invalid:

thus in the argument (3) Napoleon was French; all Frenchmen are European; therefore Hitler was Austrian all the propositions are true, but no one would say that the conclusion followed from the premises. The basic connection between the soundness or unsoundness of an argument and the truth or falsity of the constituent propositions is the following: an argument cannot be sound if its premises are all true and its conclusion false. A necessary condition of sound reasoning is that from truths only truths follow. This condition is of course not sufficient for soundness, as we see from (3), where we have true premises and a true conclusion but not a sound argument.

Lemmon is of course quite wrong: it is both necessary and sufficient for sound reasoning that from truths only truths follow. Example (3) does not show failure of sufficiency because in (3) the conclusion does not follow from the premises, as he himself has noted immediately following the statement of it. Lemmon is equivocating on the term 'follow', which in the two occurrences above is used correctly. The final quoted sentence, however, evacuates the sense to include conclusions that do not properly follow. It is unclear what Lemmon could have had in mind here: was he thinking of the missing sufficient condition as a Relevance-type constraint? But the only point that the example supports is the one that I made above: invalid arguments can have true premises and true conclusions, and validity cannot be established from actual truth values. But Lemmon's slip here is not without consequence. He is not led, here at any rate, into a fruitless search for the missing sufficient condition, but he is led to overemphasise the formality of the determination of invalidity, saying that logic may be defined as 'the study, by symbolic means, of the exact conditions under which patterns of argument are valid or invalid' (p. 5). Lemmon's mistake is doubly strange, for it was he who clarified the modal logic of Lewis' systems of entailment. (For a self-referential wrinkle in this question of the sufficient conditions of validity see the next chapter.)

²¹ I will use the terms counterfactual account of validity and modal account of validity interchangeably.


Prima facie the validity counterfactual is a condition that applies to arguments and not argument forms. We apply the counterfactual test in the standard way by looking to the closest possible worlds in which the premises are true and noting that the conclusion must be true in those worlds also. (In fact we do not need to fuss about nearness in this case, since in any possible world true premises guarantee a true conclusion.) We vary the truth-values here while keeping the argument constant. This can be understood in contrast to the way validity is assessed by form: the substitution instances are varied while the form is kept constant. Thus a truth table assesses the validity or invalidity of a form by listing the truth-valuations of the possible substitution instances of the form. It is irrelevant to the invalidity of a form that some substitution instances may be valid. In truth tables a valid substitution instance corresponds to a restriction of the full table for the form to just those lines that are allowable in the substitution instance. (In the earlier example (3) corresponds to a restriction of (2) in which only two lines are relevant, those in which p has the same value as q.) This problem with valid substitution instances of invalid forms is not present in the counterfactual criterion of validity, since it is the argument itself that is there described. Varying the truth values of the argument across possible worlds is a different process to varying the actual values of substitution instances of some argument form. In an ideal world these different processes would be kept distinct. As a number of people have pointed out, however, this is not an ideal world.

Although the counterfactual definition of validity gives the best account of the validity of an argument it does not give us an adequate epistemic handle on the matter. Intuitions about other possible worlds are no more manageable (and no less manageable) than the straightforward intuition about validity. The judgement that an argument is valid is just the same as the modal intuition that were the premises true the conclusion would have to be true also. So although the counterfactual account of validity is conceptually adequate (and indeed the most appropriately general account of validity that we possess) it falls short of providing us with a decision procedure for arguments. This, after all, is what we wanted at the outset and have had so much trouble getting. The logician sometimes solves this problem by a sleight-of-hand: he simply identifies the assessment of a form with the counterfactual assessment of the argument itself, explaining the truth-table, say, as giving the possible truth values of the argument itself rather than the argument form.²²

²² E.J. Lemmon's presentation in Beginning Logic (op. cit.) uses just such a sleight-of-hand (see pp. 64–67). I think that students often become uneasy at this manœuvre but can't put their finger on the reason for their discomfort.


However, this conflation of arguments with argument forms is illicit in general and fraught with danger. I discuss it in detail in the next chapter. For present purposes, however, the main problem is the one that we've been discussing throughout this chapter, namely the existence of valid substitution instances of invalid argument forms. If we pretend that a decision procedure for argument forms is a decision procedure for the validity of arguments themselves, we will be in danger of assessing a valid argument as invalid because we assess the wrong form. We also face the diachronic problem: a decision procedure for an argument form does not tell us that an argument is invalid simpliciter but merely that it is not valid in some particular logical system. How, then, could it give us an epistemic handle on what is going on in other possible worlds? It cannot tell us whether these very premises imply this conclusion. The conflation of a decision procedure for an argument form with a decision procedure for the counterfactual criterion of validity creates a chimera of certainty about the determination of invalidity where no real certainty exists.

However, although this conflation has the undesirable consequence that problems with determining invalidity through form flow into the intuitions about the counterfactual criterion for invalidity, there is at least one consequence that leaks back the other way. The counterfactual criterion focuses the mind on the argument rather than some argument form that is instantiated. When, for example, we ask ourselves whether argument (3) is valid we are psychologically less apt to think of the form (2) if the mind is holding the counterfactual criterion in focus. The counterfactual criterion gives us some grasp on validity prior to the specification of forms. If we did not have any modal intuitions here it would be hard to see what the formalisation of logic was trying to capture. There must be some modal intuitions that the logician is attempting to formalise.²³ These intuitions are, of course, fallible and defeasible, but they underwrite our pre-theoretic judgements of validity in such a way that we are guided in the direction of the appropriate form.

²³ The essentially counterfactual and modal nature of validity stands hard against Dorothy Edgington's view that conditionals, indicative and subjunctive, do not have truth-conditions. (See 'Do Conditionals have Truth-Conditions?' in Conditionals, (ed.) F. Jackson, Oxford: Oxford University Press, 1991, pp. 176–201.) The fact that deductive validity will be lost on her account is partially masked by an ambiguously worded and unsatisfactory account of validity which disguises its modal, or counterfactual, nature (p. 201) and the fact that her main, but not sole, interest is in indicative conditionals (p. 178). This is, however, not the place for a detailed discussion of these issues.

Yet although the counterfactual condition for validity has a beneficial effect on thinking about form, it also makes it plain how desirable it is that the conditions for validity, counterfactual and formal, be kept distinct. If form is allowed to rule over our modal intuitions then we shall soon have none of the latter left. The counterfactual definition of validity takes the argument itself as its object, whereas the formal account is directed toward the forms that the argument has. These different objects are reason enough alone to keep them distinct.

Yet now that we have them distinct we can use them to state and answer a puzzle about the determination of invalidity that has threatened from the outset. In brief it is the problem of self-consistency—it is the application of the synchronic problem to itself. Invalid forms can have valid substitution instances. Suppose, contrary to fact, that our only means of determining the invalidity of an argument was by looking at the forms of the argument. How, then, could we have discovered that it is invalid to infer the invalidity of an argument from an invalid form? In short, how could we have discovered that the following is invalid:

(9)

A instantiates invalid argument form f

A is invalid

for it is the invalidity of (9) that has been the point at issue throughout this chapter. Our inference, that it is invalid to infer invalidity in this way, would seem to be undercut! But note that if (9) is itself an argument form (with A and f variables) then the mistaken belief that every substitution instance of it is invalid is an example of the very fallacy itself. It is self-instantiating! In fact this seems to be the mistake made by both Massey and Stove, as indicated above in section two. They have wrongly concluded from the invalidity of (9) that every instance of it is also invalid. But there are some instances of (9) which are valid, namely when A instantiates a Hyper-Invalid Argument Form. So (9) is an invalid argument form that has valid substitution instances!

So how do we know that (9) is invalid? The answer, of course, is that it has invalid substitution instances (for recall that no valid argument form can have invalid substitution instances) which we know to be invalid because they have true premises and false conclusions. There is no difficulty, therefore, in determining that (9) is invalid. The self-consistency of our reasoning to invalidity is assured. It is important to remember that our problem was, after all, not how do we know that certain argument forms are invalid, but rather how do we know that certain arguments are invalid, when we have no help from actual truth values. The point was that we cannot simply advert to their instantiating an invalid argument form: we have only our belief that they do not instantiate any valid argument forms.

These considerations do throw up one noteworthy point, however. A sceptic might try to draw the conclusion that invalidity is always undeterminable on the basis of the arguments in this chapter. That would be a mistake. If invalidity cannot be determined at all then it cannot be determined that it cannot be determined. The sceptic would have no reasoning on which to base such a sceptical conclusion. Scepticism therefore is the wrong conclusion to draw. The right conclusion is that invalidity can be difficult to determine, that, above all else, it cannot be deduced. Thus an inference to invalidity is essentially fallible and defeasible—but rational for all that. This is taken up in the next section.
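The pivotal point of the section—that an invalid argument form can have valid substitution instances—can be verified mechanically for a propositional toy case. The sketch below is my own illustration (the lambda encoding of forms is not the text's):

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """Valid iff no valuation makes every premise true and the conclusion false."""
    return not any(
        all(p(env) for p in premises) and not conclusion(env)
        for env in (dict(zip(atoms, vals))
                    for vals in product([True, False], repeat=len(atoms)))
    )

# The form  p |- q  is invalid: the valuation p=True, q=False refutes it.
print(valid([lambda e: e['p']], lambda e: e['q'], ['p', 'q']))  # False

# Yet the substitution instance obtained by putting p for q, i.e.  p |- p,
# is valid -- so instantiating an invalid form does not make an argument invalid.
print(valid([lambda e: e['p']], lambda e: e['p'], ['p']))       # True
```

This is exactly why form (9) fails: an argument's instantiating some invalid form leaves open that it also instantiates, under another substitution, a valid one.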

2.5 INFERENCE

Invalidity cannot, in general, be recognised by finding some invalid argument form that the argument instantiates; instead the invalidity is a matter of there being no valid argument form—discovered or as yet undiscovered—that the argument instantiates. Inferring that there is no such form will be a risky matter given the incomplete (and incompleteable) state of our knowledge of validity. Indeed, the question presents itself: can we ever really know that an argument is invalid—leaving aside those that are hyper-invalid or actually have true premises and a false conclusion?

As suggested in the last section, many may feel the pull of a sceptical conclusion here: in general an argument cannot be known to be invalid. This would be a satisfyingly dramatic conclusion and its implications for philosophy would probably be startling enough to need little further comment. The critical edge of philosophy would be much blunted if no arguments could be determined to be invalid. Such a scepticism would leave us with a subject, and yet with not enough in the way of cognitive grip to carry out work on it. It is hard to see how philosophy could survive such a scepticism. Fortunately, as I argued in the last section, this sceptical conclusion is not only undesirable, it cannot be maintained on the basis of the argument of this chapter. Indeed, I have been arguing that a particular invalid argument is rather too pervasive. The conclusion of this chapter could not then be that invalidity cannot be determined. The point is not that invalidity cannot be determined but rather how, given that we are able to make such judgements, do we succeed in doing so? In short: what is the epistemology of invalidity?

We have already seen that there is a striking asymmetry between validity and invalidity. There is no distribution of truth-values to premises and conclusion that would be sufficient to guarantee an argument's validity, and yet the instantiation of a valid argument form is sufficient—and, if there are no anomalous valid arguments, necessary as well. By contrast, there is a distribution of truth-values that will be sufficient to guarantee an argument's invalidity—to wit, true premises and a false conclusion. The instantiation of a hyper-invalid argument form is also a sufficient condition of invalidity, but there is no form the instantiation of which would be a necessary condition for invalidity. Thus we can show that an argument is invalid by showing that it does not satisfy the necessary condition for validity, i.e. by eliminating from contention the valid argument forms that it might satisfy. And if it does not satisfy one of the sufficient conditions for validity, that is the absolute best that we can do. Furthermore, we know that this eliminative inference to invalidity is fallible, in that future discoveries, linguistic and scientific, can lead us to revise our previous assessments of invalidity. However, if we wish to show that an argument is valid then, since no actual distribution of truth-values will suffice, we must fall back on the instantiation of a valid argument form.

The epistemology of invalidity has been much hampered by the psychological attractiveness of the mistaken view that invalidity and validity are at root symmetrical and that therefore our epistemic access to validity has, as its mirror image, a similar epistemic access to invalidity: if one is formal then the other must be as well; if adducing a form will work for one then it should work for the other as well. This symmetry thesis is one manifestation of the view that I earlier labelled Deductivism, since it is an aspect of the failure to recognise that our epistemic access to invalidity is not deductive, that we are not proving invalidity. Making room for our epistemic access to invalidity means that Deductivism must be seen as the false doctrine that it is. Deductive reasoning is not, therefore, and cannot be, all that there is to reasoning, or we could not determine the invalidity of a great many invalid arguments. There must, then, be legitimate fallible reasoning that falls outside deductive reasoning and that underpins our ability to use deductive reasoning in the way we do.
In particular, I maintain that the inference that an argument is invalid is an inference to the best explanation, that it is essentially abductive in nature. We infer that an argument is invalid when we have searched through all of the possible forms for the argument and failed to turn up any that are valid. The failure to turn up a valid form is best explained by the absence of such forms. Our conclusion that an argument is invalid is, of course, both provisional and defeasible. We may not have exhausted all of the features relevant to validity in our production of argument forms, and may therefore have wrongly concluded that the argument is invalid. But even though the inference to invalidity is fallible it need not be irrational or unwarranted. The abductive inference to invalidity is fallible because all such inferences are fallible. When we run through all of the possible forms for the argument we are trying to exhaust the logical structure of the argument, thus exhausting the possible ways that it might be valid. It is perfectly the same as in scientific contexts. We think it is reasonable to believe an explanation when it is the best that has been found for a particular phenomenon.²⁴

In this case the best explanation for one's failure to turn up a valid argument form is that the argument is invalid. Likewise, if we are looking for an explanation of the fire then it is perfectly reasonable to eliminate the possible causes to get at the actual cause. As Sherlock Holmes says, 'When we have eliminated the impossible, Watson, then anything that is left, however improbable, must be the truth.'²⁵ (Of course Holmes mistakenly calls this method deduction, thus, unfortunately, contributing to the sway of Deductivism even among fictional characters!) We are ruling invalidity in by ruling validity out. Since there are no infallible markers of invalidity, we must make do with the absence of the markers of validity.

Even in the formative stages of our understanding of inferential rules abductive inference is not entirely absent. The very ability to learn and use a language requires a great deal of non-deductive reasoning. The need for charity in interpreting others is simply disguised inference to the best explanation and induction. We understand others by applying reasonable maxims to their utterances. At the most fundamental level, then, non-deductive inferential methods rule our understanding. Our ability to reason is not something apart from the 'scientific' methods of inference, but something that rides on top of those non-deductive modes of inference. Or, to change the image, deductive inference only survives in an atmosphere of non-deductive inference.

Our grip on language carries with it some rudimentary grasp of valid inference. Natural languages do not fit our formalised logics perfectly, as logicians themselves emphasise, or we would not need to be so careful in translating from English to symbolese. Yet there is some match-up or we would not succeed at all. This grasp that we have on valid inference does not immediately convert, however, into a grasp of invalid inference.
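The eliminative induction of note 24 can be made quantitative. With a uniform prior over how many white balls a 20-ball urn contains (both the urn size and the prior are assumed purely for illustration), Bayes' theorem shows the probability that the urn holds no white balls rising with every white-free draw. A sketch:

```python
from math import comb

N = 20                        # balls in the urn (assumed for illustration)
hypotheses = range(0, N + 1)  # H_w: the urn contains exactly w white balls

def likelihood(w, k):
    """P(k draws without replacement, none white | H_w): hypergeometric."""
    return comb(N - w, k) / comb(N, k) if k <= N - w else 0.0

def posterior_no_white(k, prior=None):
    """P(H_0 | k non-white draws), with a uniform prior over w by default."""
    prior = prior or {w: 1 / (N + 1) for w in hypotheses}
    joint = {w: prior[w] * likelihood(w, k) for w in hypotheses}
    return joint[0] / sum(joint.values())

probs = [posterior_no_white(k) for k in range(0, N + 1)]
# probs[0] is the bare prior 1/21; each white-free draw raises the posterior,
# and after all 20 balls are drawn white-free the hypothesis is certain.
```

The support for 'the urn contains no white balls' thus increases with each draw, exactly as the footnote claims, because every rival hypothesis H_w (w ≥ 1) makes a run of white-free draws progressively less likely.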
When we discern that one sentence follows from some other sentence we are discerning some relation be- tween the two, in general some necessary connection. This is what we mean when we say that there are marks of validity, something that can be discerned upon inspection. It does not follow that there are marks of invalidity. Invalid- ity consists of the absence of those necessary connections that sustain valid

24 This is akin to an eliminative induction. Suppose we have an urn with a mixture of coloured balls and we make a number of withdrawals (without replacement) and draw no white balls. Then the hypothesis that the urn contains no white balls is inductively supported and the prob- ability increases with each draw. 25 All readers of Conan Doyle seem to automatically correct for the oddity in his way of expressing this point, for he obviously, falsely, assumes that there will be only one explanation left once the impossible is eliminated. He really intends that we eliminate the improbable to discover the probable.

Abductive Inference and Invalidity

45

inference. In general, there are no marks of invalidity's presence, merely the absence of marks of its absence—to put it so as to bring out the full existential pathos.

At this stage the reader may feel that only half the job has been done. Surely, he wants to say, we need also to know more about how these non-deductive inference patterns work. This is a perfectly reasonable request, but not, unfortunately, when made here. My sole aim has been to argue for the necessity of such inferences, to indicate how the application of deductive inference has been warped by the neglect of these other modes of inference; how deduction has been asked to do too much. 26 Indeed my intention was to argue that the promotion of deductive logic as exhaustive of reason was inherently contradictory. The determination of an argument as deductively invalid is not itself deductive—it requires the use of abductive inference.

This account of the epistemology of invalidity is an advance, I think, over the suggestions of Gerald Massey and David Stove. Although both authors discuss the synchronic problem they are both a little unclear, I think, as to its implications. Indeed their suggestions as to how the problem is to be overcome are altogether too vague if left as they stand. From Massey we have only the claim that we 'scrutinise' the argument to determine its invalidity. He says, in fact,

    But suppose none of the arguments we devise for some proposition p strike us as valid. What then? Do we need a theory of invalidity to discredit them? By no means! That these arguments seem upon careful reflection to be invalid is reason enough to abandon them and to look elsewhere for a good argument for p. It is much the same with arguments propounded by others. Those that upon close scrutiny seem invalid are best set aside. (p. 496)

These appeals to 'careful reflection' and 'close scrutiny' seem a rather retrograde step in Massey's otherwise excellent essay since they make it appear, once again, as though there is something internal to an argument that is able to be seen upon scrutiny to be the positive mark of its invalidity—a form, in other words! And this after Massey has, rather relentlessly, pointed out that there is no such thing. What does he think we can possibly see when we scrutinise?

David Stove's positive suggestion amounts to little more than an appeal to intuition. We have intuitive assessments of validity and invalidity and these assessments are sufficiently robust to ground our agreement on logical matters. In fact he is against the formalisation of logical systems because at bottom we can do no better than our intuitions. 'Cases rule!' is the slogan that summarises

26 And, anyway, it has been done perfectly well elsewhere; see J. R. Josephson and S. G. Josephson (eds), Abductive Inference: Computation, Philosophy, Technology (Cambridge: Cambridge University Press), 1994.


The Art of Necessity

his anti-formalistic attitude to assessments of invalidity and validity. His is a thoroughly Wagnerian attitude to logic. (Indeed one might call him the Perfect Wagnerian.) Yet, although I think intuition plays a role, it is not, and cannot be, all that there is to the story, for it does not give us a reasonable procedure for settling disputes about invalidity. If all we have are appeals to brute intuition we have nothing to say to someone who disagrees with our assessments of invalidity—no reasoning process that we can employ with them. Stove's position is not sceptical—as he himself insists—but it leads to scepticism in only one move. And if it offers no reasonable process for settling disputes as to invalidity it also offers no account of how those intuitions work. There is no reason, after all, to think that our brute intuitions are in fact correct. A proof of the soundness of a system is both a justification of the process of formalisation and a ratification of our intuitions.

Neither Stove's groundless intuitions nor Massey's careful scrutiny upon nothing provide an adequate account of our ability to make assessments of invalidity. Seeing the process as an inference to the best explanation allows us to see how determinations of invalidity can be arrived at and agreed upon. Such determinations can be rational and yet fallible—much as we should think of philosophy itself.

2.6 Infix

I have been emphasising that the determination of the invalidity of an argument is a matter of inference to the best explanation—and if we restrict ourselves to making judgements from forms that is so. But in some cases we have additional information, that is non-formal in nature, that can be used to facilitate a determination.

In fact this additional information was mentioned in chapter one: it is Ockham's summary of the principles of entailment. The significant items are:

(8) From something necessary something contingent does not follow.

(9) Something impossible does not follow from something possible.

These two are significant because they allow us to identify invalid arguments. For obviously if we have an argument that has all necessary premises and a contingent conclusion then we know, by (8), that it cannot be valid. Likewise if we have premises that are co-possible, then we know that any argument to an impossible conclusion cannot be correct, by (9).
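The two principles can be read as a partial decision procedure for invalidity, and a minimal sketch may make the asymmetry vivid. This is my encoding, not the author's: the string statuses and the function name are stipulations, and note that the joint possibility of the premises has to be supplied as a separate input, since co-possibility cannot be read off the premises one by one.

```python
def invalidity_by_modal_status(premise_statuses, premises_copossible, conclusion_status):
    """Flag an argument as invalid using Ockham's principles (8) and (9).
    Statuses are drawn from {'necessary', 'contingent', 'possible', 'impossible'}."""
    # Principle (8): from premises that are all necessary, nothing contingent follows.
    if premise_statuses and all(s == 'necessary' for s in premise_statuses) \
            and conclusion_status == 'contingent':
        return 'invalid'
    # Principle (9): from jointly possible (co-possible) premises,
    # nothing impossible follows.
    if premises_copossible and conclusion_status == 'impossible':
        return 'invalid'
    # The principles never certify validity; at best they fail to detect invalidity.
    return 'undetermined'

# Necessary premises with a contingent conclusion are caught by (8):
assert invalidity_by_modal_status(['necessary', 'necessary'], True, 'contingent') == 'invalid'
# Co-possible premises with an impossible conclusion are caught by (9):
assert invalidity_by_modal_status(['contingent'], True, 'impossible') == 'invalid'
# Everything else the procedure leaves open:
assert invalidity_by_modal_status(['contingent'], True, 'contingent') == 'undetermined'
```

As the text goes on to stress, the procedure is decisive only given prior agreement on the modal status of the constituent claims, and it only ever delivers verdicts of invalidity, never of validity.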


In fact the special exception that we mentioned earlier in this chapter—any argument form with a tautologous premise and a contradictory conclusion will have only invalid substitution instances—can easily be seen to be a special case of a combination of (8) and (9), in the terms that are recognised by propositional logics.

But (8) and (9) offer us an ability to recognise invalidity in cases that go beyond this special exception—where we have any set of necessities, or any set of co-possible statements, for premises. Then, in the first instance we know that any contingent claim cannot validly follow; while in the second we know that any impossibility cannot validly follow. So there are some instances where philosophy is able to make decisive judgements as to the invalidity of an argument—provided there is agreement as to the modal status of the constituent claims.

Yet although the preceding discussion staves off scepticism it does not leave our picture of philosophy entirely unaltered. Many philosophers have seen philosophy as equipped with special tools to do a special job. The analysis of an opponent's position, using the subtle determination of invalid and fallacious reasoning, has seemed to many to be a different job to that done by the sciences. Even philosophers of a naturalist bent have taken it to be their task to contribute to the sciences with tools that only philosophers know how to wield; the philosopher and the scientist may have a common goal but their tools, their intellectual equipment, are different, and thus their role in the enterprise of knowledge is different. On this view, philosophy exists above the sciences, determining the fallaciousness of reasoning with methods that result in certainties; the flashing razor of logic being essentially sharper and cleaner than the messy, blunted, methodology of the natural sciences. 27 We have seen that this is not so, however.
The natural scientist and the philosopher share the same logical tools—inference to the best explanation underpins both. Thus the last difference between philosophy and the empirical sciences vanishes, along with the comforting illusion that philosophy is a broker of certainties. Philosophy is not a special subject, not a privileged meta-discipline; it is just a part of our single epistemic enterprise, with common aims and using common methods.

27 Popper tried to 'correct' the natural sciences by suggesting that they should make do with modus tollens as their single methodological rule, the rule of falsification. They were thus advised to eschew inductive reasoning in favour of a deductively sanctioned rule. He did not seem to realize, however, that any mistaken deductive inference required the reintroduction—into the sciences no less than into philosophy—of the kind of non-deductive inferences that were supposed to have been eliminated. The sciences should always have responded, therefore, by asking Popper to try and live with the same restrictions that he asked of them.

Chapter 3

Validity and Necessity

Now that we have considered invalidity at some length, we turn to the notion of valid inference—which will be our exclusive concern for the next two chapters. I will be arguing, in essence, that the attempt to formalise the concept of valid inference by employing set-theoretic notions leads to considerable difficulties. I begin by laying out the traditional understanding of the concept—the one that all philosophers are familiar with—and discuss medieval attempts to clarify the concept. I then begin to trace the reform movement of the Twentieth Century (which I attribute to the joint influence of Tarski and Quine) and examine John Etchemendy's argument for the non-equivalence of the traditional concept with the reformer's set-theoretic surrogate. In chapter four I go deeper into the problems with the resulting entanglement between set theory and logic.


3.1 The Conditions of Validity

The use of valid inferences predates any theory of what it is for an argument to be valid, but that did not prevent the pre-Aristotelian philosophers from disputing the inferences of others. It is clear, for example, that Pythagoras and Thales in the Sixth Century BC had a working understanding of the notion of valid inference even though no theory of validity has survived. Indeed the absence of a theory probably attests to the widespread agreement on what counts as a valid inference, for definitions tend to arise as a way of settling otherwise unsettlable disputes; if there are no such disputes there is no need for definitions. Self-conscious theories of validity seem to have arisen mainly as a response to the Sophists, who, if tradition is to be believed, gave transparently fallacious arguments that had a similar form to an opponent's valid inference—thereby attempting to discredit them. (I will forbear from repeating the last chapter's injunctions against this move!) Plato, it appears, took this challenge seriously and it provoked him to consider the link between the premises and conclusion of a valid argument. His considered view was that the necessary connections between sentences, the entailment of one sentence by others, are a reflection of the connections between the Forms to which the premises and conclusion refer. Hence if the Forms are, in effect, generalisations of the Euclidean abstractions point, line, plane, etc., then logical inference is a generalisation of geometrical inference. (This model of necessary connections has, of course, recently been taken over by Armstrong, Dretske and Tooley as a way of explaining causation and laws of nature.)

Underlying this Platonic conception of valid inference there is the germ of the modern conception. We may state this modern conception as follows:

Val: An argument A is valid iff it is impossible that its premises be true and its conclusion false.
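In the possible-worlds notation that became available with modal semantics, Val can be glossed as the claim that the conditional from premises to conclusion is necessary. (This rendering is mine, not part of the original discussion; it anticipates the 'full semantics for Modal Logic' mentioned just below.)

```latex
% Val in modal notation, for premises P_1, ..., P_n and conclusion C:
\mathrm{Valid}(P_1, \dots, P_n \;\therefore\; C)
  \;\iff\; \neg \Diamond \bigl( P_1 \wedge \cdots \wedge P_n \wedge \neg C \bigr)
  \;\iff\; \Box \bigl( (P_1 \wedge \cdots \wedge P_n) \rightarrow C \bigr)
```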

In one sense this is vaguer than the Platonic conception since it does not mention the ontological ground of this impossibility; it would take the rise of a full semantics for Modal Logic before one could feel that the definition had been adequately spelt out. On the other hand it is more satisfying than the Platonic conception as it does not require one to know, or even pretend to know, the strange mechanics of coupling Forms. Val is at the heart of modern logic and is stated explicitly in nearly every textbook. But is it true? Let us break the biconditional down into its two implicational halves, thus obtaining a necessary and a sufficient condition for validity.

NecV: An argument A is valid only if it is impossible for its premises to be true and its conclusion to be false.

SuffV: An argument A is valid if it is impossible for its premises to be true and its conclusion to be false.

The first conditional here, NecV, looks to be true beyond all doubt, but what of the second? Is it really true that any argument for which it is impossible that it have true premises and a false conclusion is valid? Consider the following argument:

(1) Either snow is white or snow is not white
    This argument is invalid


If this argument is valid then, since its premise is necessarily true, its conclusion must be true as well. But the conclusion says that the argument is invalid. So we have the consequence that the argument is invalid if it is valid. Suppose then that the argument is invalid: then, by SuffV, it must be possible for the premise to be true and the conclusion false. But we know that the premise is a necessary truth so that only leaves the possibility that the conclusion is false. But if the conclusion were false then the argument would be valid. On the condition that SuffV holds and the argument is supposed to be invalid we can show then that the argument is valid. But this is no paradox since we can deny SuffV; if we do we have the conclusion that it is impossible for the premise of (1) to be true and the conclusion false (since both are in fact necessary truths) and yet the argument is invalid—as the conclusion says. SuffV is false. In (1) we have an argument that is invalid but which satisfies the conditions for a valid argument; there must then be some condition for validity that has been omitted. Of course that does not mean that we have any reason to doubt that our conditions for validity are necessary: NecV still looks to be perfectly true.

Though the conclusion drawn from argument (1) may be novel, (1) itself is not. In a slightly different form it was included in the group of problems known as the Insolubilia and can be found in the treatise Perutilis Logica of 1350 by Albert of Saxony. In Albert of Saxony's original the argument was

(2) God exists
    This consequence is not valid

but it is clear that the premise is intended to be a necessary truth and any such will do. The argument can also be found in the work In Universam Logicam Quæstiones attributed to the author now known only as Pseudo-Scotus (though he may have been a virtual contemporary of Albert). 1 It may therefore go back to Jean Buridan who was known to be Albert's teacher and a major contributor to the Insolubilia literature.

Albert of Saxony draws a rather different conclusion from (1) than I have. He argues that, although (1) is invalid, the conclusion that states that it is invalid is not true: things are as the conclusion signifies them to be and yet the conclusion is not true. By this means he can save SuffV since he can maintain that the argument is invalid—though, rather paradoxically, it is not true that

1 William and Martha Kneale speculate that he may have been one John of Cornwall. See The Development of Logic (Oxford: Oxford University Press), 1962, 1984. p. 771.


it is invalid. The argument is invalid even though it is not possible for the argument to have true premises and a false conclusion. In modern parlance we would probably say that Albert's solution to the self-referential paradoxes is to reject one implicational direction of the Tarski biconditionals.

Coincidentally, the height of the discussion of the Insolubilia, around 1350, occurred at the same time as the first and most devastating outbreak of the Black Death. It was this same major outbreak that was responsible for the deaths of William of Ockham, Thomas Bradwardine and Albert of Saxony. Indeed it is plausible to suggest that this also marks the end of the Medieval world proper. For the Medieval outlook of the Eleventh to Thirteenth Centuries was largely based on the idea that the order and reasonableness of the world was a reflection of God's reason and order. To see that order break down in the Plague years and the reasonableness fracture in the light of the paradoxes was to court Atheism. Increasingly from the Fourteenth Century, belief rests on faith rather than reason, a reliance that was finally enshrined in the views of Luther, Calvin and Wyclif and the Protestant Reformation. Although the simplest of the Insolubilia, the Liar Paradox, was revived in the Twentieth Century in the light of the Set Theoretical paradoxes the discussion has rarely extended to arguments like (1) above. Truth has thus come to seem like an urgent and intractable problem whereas validity is assumed to be a settled matter. This seems an indefensible division: if truth has its problems then surely logical truth will also. 2

To return then to my main argument, SuffV is false: we do not have a set of sufficient conditions for the validity of arguments.
It is worth noting, however, that Anderson and Belnap's arguments against Classical Logic make the same point; they think that the Classical notion of validity enshrined in Val is insufficient and must be supplemented by Relevance conditions. My argument is entirely neutral on the question of relevance conditions—but the argument above from (1) against SuffV is, I think, rather stronger than Anderson and Belnap's trading of intuitions on the so-called paradoxes of implication. 3

If SuffV is false how can we be sure that NecV is not false as well? Perhaps there is some exotic argument, self-referential, or otherwise monstrous, that is valid even though it is possible that it have true premises and a false conclusion. Indeed, perhaps there is some argument that is valid even though its premises are actually true and its conclusion actually false. May the following argument

2 There is a significant disanalogy between truth and logical truth and that is that necessities are usually taken to be properties of propositions whereas truth is often taken to be a property of sentences. Propositions cannot be paradoxical—on their usual construal. If this is correct then (2) is not valid because the corresponding conditional proposition is not a necessary truth.
3 I discuss Anderson and Belnap's views further in chapter five.


not be an example?

(3) This premise is true
    This premise is true

The suggestion is that this argument has the form p therefore p and all substitution instances of that form are valid. Yet the premise can be consistently assumed to be true while the conclusion is definitely false, since it contains the false presupposition that it is a premise. 4 But as tantalizing as it is to suppose that (3) is an example of a valid argument with a true premise and a false conclusion, the case will not, I think, work. This is because it is not really plausible to suppose that the argument is, in fact, an instantiation of the form p therefore p. The indexicals make it clear that whatever proposition the premise asserts it is not the same as the proposition asserted by the conclusion, as is clear if one tries to paraphrase the sentences without the indexicals. (3) is therefore not a counterexample to NecV. A more promising suggestion comes from David Stove.

(4) All arguments with true premises and a false conclusion are invalid
    This argument has true premises and a false conclusion
    This argument is invalid.

Stove thinks that, although it is clearly valid, this argument has true premises and a false conclusion: (4) is therefore supposed to be a direct counterexample to NecV. 5 Is Stove correct? I think not. His line of thinking seems to be that if the argument is valid—and its form surely suggests that it is—then, if its premises were true, its conclusion must be true as well. But the conclusion of (4) cannot be true if (4) is valid and so we have an argument that is valid but is such that if the premises were true then the conclusion would have to be false. And so we have a counterexample to NecV. This reasoning is mistaken, however. Stove assumes that the premises of (4) are true and then deduces a contradiction between that assumption and the conclusion being true. But there is no reason to assume that both premises are true. Given that the conclusion is false (as it must be if the argument is valid)

4 Note that the sentence is a near relative of the Truthsayer sentence 'This sentence is true' which can consistently be taken to be true or false. In the Kripkean scheme it is ungrounded and therefore not made true at the minimal fixed point of the inductive truth-definition, yet it will be true at other fixed points.
5 D.C. Stove, The Rationality of Induction (Oxford: Oxford University Press), 1986. Stove, very characteristically, refers to such examples as Pornology. For such things is he much missed.


it is perfectly plausible that one of the premises should be false—and since it cannot be premise one it must be premise two. But if premise two is false then its first conjunct is false—and so it is indeed false in an entirely self-consistent way. Argument (4) does not show that NecV is false. It is simply an example of a valid argument with a false premise.

We may conclude, for want of any convincing counterexample, that NecV is true—but note that this is once again an inference to the best explanation—and that our standard notion of validity holds good: there is a modal connection between the premises and conclusion of a valid argument. 6 In the remainder of this chapter and the next we wish to examine how this is explicated (or fails to be explicated) in current Model Theory.

In his closely argued book, The Concept of Logical Consequence, John Etchemendy argues that the standard presentation of semantics in First-Order Model Theory—that is, the view that derives from Alfred Tarski's famous 1936 paper (itself called 'On the Concept of Logical Consequence')—fails to capture the essence of our intuitive notions of validity and logical truth. It fails to do so since it does not, contrary to Tarski's own claims, capture the modal nature of these concepts. Thus, if Etchemendy is correct, and I will argue that he is, the standard way of rendering the modal nature of these concepts tractable

6 Pseudo-Scotus makes three attempts to give necessary and sufficient conditions for valid consequence. His first is simply our Val: a consequence is valid iff it is impossible for the conjunction of the premises to be true and the conclusion false. He gives a counterexample to this proposal, as follows: every proposition is affirmative therefore no proposition is negative. The argument is valid but, though the conclusion is false (indeed, self-contradictory) the premise is not obviously false. I disagree with the assessment. In any world in which the premise is true the conclusion will be false.
Pseudo-Scotus' second attempt at a formulation is as follows: a consequence is valid iff it is impossible for things to be as the premises signify them to be without also being as signified by the conclusion. Against this Pseudo-Scotus has another not altogether convincing counter-instance: no chimaera is a goat-deer so a man is an ass. Here Pseudo-Scotus appears to be making a, now, rather standard complaint about relevance. (Though the example is actually invalid. Possibly the example was meant to be: a chimaera is a goat-deer so a man is an ass. Then, were things as the premises signify them to be there would still be no reason for things to be as the conclusion signifies them to be.)
The third account of validity says: an argument is valid iff it is impossible that the premises should be jointly true and the conclusion false when they are both formulated together. Against this suggestion Pseudo-Scotus brings the counterexample (2): God exists, therefore this consequence is invalid. Pseudo-Scotus then remedies this third account by adding the clause 'except for the single case in which the sense of the consequent is inconsistent with the sense of the inferential particle which marks the existence of the consequentia'. It is a response that is obviously ad hoc, but presumably a full response would connect us up with the Liar Paradox. See W. and M. Kneale, The Development of Logic (Oxford: Oxford University Press), 1962, pp. 286–7 for their views on Pseudo-Scotus' arguments.


is shown to be mistaken. Our First-Order Logic is all at an angle to the intuitive notion of valid consequence and logical truth that it was attempting to formalize. In the next section I will give an elementary example of the problem that Etchemendy has analysed and show that there is a solution in this simple case. This I think will bring out more clearly why the problem is more serious in other cases; I also wish to draw a rather different moral from that which Etchemendy himself draws.

3.2 The Ambiguity of Truth Tables

It was pointed out in the previous chapter that there are two quite different ways of interpreting truth-tables. One can either see them as testing propositional forms for logical truth and validity; or one can see them as testing propositions themselves for the same characteristics. It is the first view, however, that has become the standard view among logicians. We can best illustrate the difference by looking at the following sample table.

S        S'       not S'    S and not S'
true     true     false     false
true     false    true      true
false    true     false     false
false    false    true      false
If we think of S and S' as propositional variables into which propositions can be substituted then the table is determining the possible values of the compound propositions. Each line of the table will then represent different propositional substitutions. For example, line one might be the case when S is the proposition snow is white and S' is the proposition grass is green. Line three, however, might be the case when S is the proposition snow is red and S' is the proposition the sky is blue. The final column tabulates the results of these different propositional substitutions. On this interpretation of the table the expression S and not S' is a function in two variables with arguments taken from the set of propositions and values in the set {true, false}. Call this the schema interpretation.

On the second interpretation of the truth table S and S' are actual propositions, say snow is white and grass is green respectively, and the table records the different combinatorial possibilities that result when these propositions are allowed to vary in truth values. Each row of the table thus represents a different possible world and the last two columns record the way that compound propositions are determined by their constituent atomic propositions. Line three, for example, represents a world in which snow is not white but grass is green. On this interpretation the expression S and not S' is not a function in two variables, it is simply a proposition which will vary in truth value from one world to another. We could, therefore, think of the proposition (changing the mathematical picture slightly) as a function in one variable, from the set of possible worlds to values in the set {true, false}. In chapter two I called this the counterfactual, or modal, account of the truth table.

It is clear enough that the two interpretations are quite different. In practice, however, philosophers will often simply slide between the two conceptions, using the first as a way of explaining the formality of logical truths, and the second as a way of explaining their necessity. But this should be, indeed must be, a judicious vacillation, since the two conceptions will only coincide in rather special circumstances. For example if S were itself a tautology then there would be no plausible interpretation for lines three and four of the above table on the counterfactual interpretation—for S could not then be false. In practice, therefore, only contingent propositions are chosen when one wants to move to the second interpretation of the truth table. But even this precaution is not enough to prevent absurdities from arising. For suppose that S were 'Mary is at least six feet tall' and S' were 'Mary is at least five feet tall'; then there is no genuine possible world where S is true and S' is false, and hence the second line of our previous table is illegitimate and S and not S' is false at every line—and therefore an impossibility.
But since it imports a mathematical claim it could not quite be described as a logical falsehood. However, whether it be a logical or a mathematical truth—or some graceless hybrid of the two—the example makes it clear that one cannot always interpret the full truth table as giving us the genuinely possible worlds, even when the individual propositions are contingent.

It is clear, then, that the second, modal, interpretation of the truth table will only generate the full array if the propositions are chosen carefully: they must be contingent and independent of one another. It is not necessary to take this precaution if one is working with the first interpretation of the table. But that is not to say that some care is not required here as well. The full truth table array will only be generated on this schema interpretation if there are sufficient substituens. In practice this means that we cannot be restricted to the actual sentences of a language, since the expressive resources of a language are contingent; instead we must think of the substituens as abstract entities: propositions, which will be the meanings of all possible sentences. If the language in use happens to fall short—and since its expressiveness is contingent there is no reason why it should be maximal—then possible sentences are standing by to fill in the gaps. Both interpretations of the truth table, therefore, require abstracta—either in the form of propositions, or in the form of possible worlds—but it is also clear that these abstracta are doing quite different jobs. On the first interpretation of the truth table they are needed to ensure that we can say all that it is possible to say of the actual world; on the second interpretation they are needed so that we can say all that we want to be able to say of the non-actual possible worlds. So not only will the two interpretations generate the same tables only under quite special circumstances, their metaphysical underpinnings are really quite different.

If the two interpretations will only coincide in special circumstances and have such different ontological needs, then the question arises as to why they are so easily conflated. Perhaps the answer lies in the fact that we can readily be induced to make the slide from actual values of possible sentences to possible values of actual sentences—since they seem, superficially, to be so similar. Indeed this slide, or something very like it, occurs in the two original presentations of the Truth Table method that appeared simultaneously in 1921—in Wittgenstein's Tractatus and Emil Post's Doctoral Thesis, published as the paper 'Introduction to a General Theory of Elementary Propositions.' 7 When Wittgenstein introduces truth tables, around proposition 4.31, it is with the explicit recognition that the lines correspond to the different possibilities of truth of the given propositions. As he says at 4.3: 'Truth-possibilities of elementary propositions mean possibilities of existence and non-existence of states of affairs'.
Thus for Wittgenstein the ‘possible worlds’ are generated as the combi- natorial possibilities inherent in the set of states of affairs. When Wittgenstein comes to the end of the proposition 4 series, however, he announces that ‘The general propositional form is a variable’ (4.53) and throughout the 5 series propositions he is concerned with schemata and logical form. By 5.54, in fact, Wittgenstein declares that ‘In the general propositional form propositions oc- cur in other propositions only as bases of truth-operations.’ Although he does not say that the lines of the truth table now correspond to different proposi- tions it would have been open to him to do so—since the notion of substitution is now available to him. 8 Without being fully aware of the magnitude of the

7 Reprinted in From Frege to Godel:¨ A Source Book in Mathematical Logic, 1871–1931 (ed.) Jean van Heijenoort, Cambridge, Mass., Harvard University Press, 1967 pp. 264–283. 8 In 5.01 he says, in fact ‘Elementary propositions are the truth-arguments of propositions.’ This is as close as he comes to enunciating a general principle of substitution. The general concern with formality is enunciated at 5.13 where he says ‘When the truth of one proposition follows from the truth of others, we can see this from the structure of the propositions.’ c.f.

Validity and Necessity

57

change Wittgenstein has shifted from the second, modal, interpretation of the truth table to the first interpretation. In Emil Post’s paper the slippage goes in the other direction. Unlike Wittgen- stein, Post states the principle of substitution at the outset—and also notes that the principle was omitted from Principia Mathematica by Russell and Whitehead (this, perhaps, being the cause of Wittgenstein’s haziness on the subject)—but it is very difficult to see whether the symbols that he introduces to stand for propositions are dummy-names or variables. At the beginning of the paper they are called variables and treated as such, but when the truth ta- bles are introduced the lines are referred to as possibilities. 9 Thus he says: ‘So corresponding to each of the 2 n possible truth configurations of the p’s a defi- nite truth value of f is determined’ (p. 268). But it is very unclear whether these are the possible truth values of some definite proposition or whether they are the actual truth values of the many possible propositions. Nor is it difficult to understand why a certain amount of unclarity might have been an advantage here. After all, the first interpretation of the truth table makes the proposition schemata intelligible (so that it generates the table), only if there is an implicit quantification over abstract entities, i.e. propositions. But given that we now have two interpretations of truth tables the ques- tion naturally arises as to the relation between them. 10 We know that they will not coincide, except under rather special circumstances. Given that a table is set up to test some propositional schema, say S and not S’ , for tautologous- ness, what can we infer from the resultant table about a proposition with that form—in particular, can we tell that it has, or lacks, some desirable modal characteristic? 
Fortunately, the answer to this last question is 'yes, we can.' The reason is that even though the two interpretations will not always generate the same table, the modal interpretation will always generate a sub-table of the other. Thus whenever a particular form is tautologous, a proposition

5.131, also 5.54.
9 Lower-case letters are called variables and the upper-case letters P and Q seem to be dummy-names of propositions. But Post uses the lower-case letters almost exclusively and treats them as though they were propositions in their own right. By 'possibilities' Post may have meant epistemically possible (alternative) actual propositions.
10 Copi, for example, says in his Symbolic Logic, 5th edition: 'there is nothing necessary about the truth of B [Balboa discovered the Pacific Ocean]. But the truth of the statement B ∨ ¬B can be known independently of empirical investigation, and no events could possibly have made it false for it is a necessary truth. The statement B ∨ ¬B is a formal truth, a substitution instance of a statement form all of whose substitution instances are true. A statement form that has only true substitution instances is said to be tautologous, or a tautology.' (p. 27). The implication is that the necessity of B ∨ ¬B is explained by the tautologousness of p ∨ ¬p, but the fact that the former will be represented by a line of the truth table of the latter only explains why it is true; it says nothing to suggest that it will be a necessary truth. For that one needs a further argument.


The Art of Necessity

with that form will always have every line of the truth table 'true', even if the substituend requires fewer lines. In other words, when the schema interpretation designates a form tautologous or valid, the modal interpretation will designate a substitution instance of that form a necessary truth. (A point of which much was made in the previous chapter: valid argument forms have only valid substitution instances; so tautologous forms have only necessary truths—i.e. tautologies—as substitution instances.) The reverse need not hold, however, and we have already seen an instance of its failure. Not-(S and not-S') is not a tautologous schema, but if we were to substitute for S 'Mary is at least six feet tall' and for S' 'Mary is at least five feet tall' then the resultant proposition would be a necessary truth, though not a logically necessary truth. This is simply because the one line that has 'false' as the final entry on the schema interpretation cannot count as legitimate on the modal interpretation. All of which is simply to say that there are more necessary truths than simply the logical truths. However, the temptation to conflate the two interpretations has had the result that some philosophers have fallaciously concluded that all necessities must be logical necessities. Because they think that the formal criterion of tautologousness is equivalent to a statement of logical necessity, they have thought that all propositions are free to vary independently of one another—the only constraint being logical constraint. Wittgenstein makes this mistake in the Tractatus.
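To make the contrast concrete, here is a minimal Python sketch of the two readings (my own illustration, not part of the original apparatus): the schema interpretation enumerates every assignment line of a table, while the modal interpretation keeps only the lines that describe possible worlds. The conditional frame and the `possible` test for the Mary example are assumptions introduced for the sketch.

```python
from itertools import product

def truth_table(schema, n):
    """All 2^n assignment lines for an n-variable schema (schema interpretation)."""
    return [(vals, schema(*vals)) for vals in product([True, False], repeat=n)]

def tautologous(schema, n):
    """Schema interpretation: a form is tautologous iff every line comes out True."""
    return all(value for _, value in truth_table(schema, n))

# 'p or not p' is tautologous; a bare conditional form is not.
assert tautologous(lambda p: p or not p, 1)
assert not tautologous(lambda p, q: (not p) or q, 2)

# Modal interpretation: keep only lines that are genuinely possible for the
# particular propositions substituted in. With S = 'Mary is at least six feet
# tall' and S' = 'Mary is at least five feet tall', the line (S=True, S'=False)
# describes no possible world.
def possible(s, s_prime):
    return not (s and not s_prime)   # being at least 6 ft entails at least 5 ft

modal_lines = [(vals, value)
               for vals, value in truth_table(lambda s, sp: (not s) or sp, 2)
               if possible(*vals)]

# Every modally legitimate line is True: a necessary truth that is not a
# logical (tautologous) one.
assert all(value for _, value in modal_lines)
```

The filtered table is the 'sub-table' of the text: the modal reading never adds lines, it only strikes out impossible ones.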
Early on, at 2.061, he asserts that 'States of affairs are independent of one another' and at 2.062 that 'From the existence or non-existence of one state of affairs it is impossible to infer the existence or non-existence of another.' This independence eventually turns into the thesis, after much discussion of the significance of logical form, that 'Just as the only necessity that exists is logical necessity, so too the only impossibility that exists is logical impossibility' (6.375). But unless one is very liberal with the meaning of the term 'logical'—so that mathematical and conceptual necessities count as logical 11—this claim seems false on the face of it and is unsupported by any argument. Wittgenstein has simply been seduced by the dual nature of the truth table and conflated formality with necessity.

To return, though, to our main line of argument: we have seen that there is a connection between the two different semantics for truth tables, and indeed that the connection is another manifestation of the point that was discussed in the previous chapter: valid inference forms have only valid substitution instances. But by giving us a bridge between formality and necessity it gives us

11 see 6.3751. Wittgenstein is driven to this implausible view precisely because he has already made the mistake that I indicate.


a justification (in the case of a single system) for our intuitive notion of validity, outlined in §2 above—and, perhaps even more importantly, it gives us an epistemic handle on modal concepts, something that we might have thought we did not have. 12 The important question to which we now turn is whether there is a way of extending this argument to other logical systems, including the most philosophically significant system, the first-order predicate calculus. It is this question that is addressed by John Etchemendy in his book The Concept of Logical Consequence.

But before we go on to give Etchemendy's argument it is worth recasting the above argument using his terminology. What I have called the modal interpretation of the truth table, Etchemendy refers to as a representational semantics; the other interpretation—what I, looking to preserve the connection with the previous chapter, have simply called the schema interpretation—Etchemendy refers to as a substitutional semantics, which is a species of interpretational semantics. (Thus far I have not used Etchemendy's terminology because I think that it gives the misleading impression that the two semantics are interpretations of the very same entity—whereas the object is different in the two cases: in one case it is a formula with a number of free variables, and in the other it is an actual proposition.) In a representational semantics a sentence's meaning—the proposition it expresses—is taken to be fixed, and its truth is determined by its relation to the world—or, if it is compound, by the way its component sentences contribute to its truth. 13 Thus when we think of the sentence having a different truth-value in another possible world we are assuming that the only thing that has changed is the world that it stands in relation to—it still means what it does in the actual world. All of which is just to say that the proposition is fixed and the 'world' is the variable.
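The fixed-meaning, varying-world picture can be given a toy rendering (an illustrative sketch only; modelling worlds as truth-value assignments to atomic propositions is my own assumption, not Etchemendy's formalism):

```python
# Worlds as assignments of truth-values to atomic propositions (a modelling
# choice made for illustration).
worlds = [
    {"snow_is_white": True,  "grass_is_green": True},
    {"snow_is_white": True,  "grass_is_green": False},
    {"snow_is_white": False, "grass_is_green": True},
    {"snow_is_white": False, "grass_is_green": False},
]

# Representational semantics: the sentence's meaning is held fixed; only the
# world varies. The fixed meaning of a compound is a function of worlds.
def snow_or_not_snow(world):
    return world["snow_is_white"] or not world["snow_is_white"]

def snow_and_grass(world):
    return world["snow_is_white"] and world["grass_is_green"]

# Necessary truth: true at every world; contingent: its value varies with
# the world it is carried across to.
assert all(snow_or_not_snow(w) for w in worlds)
assert len({snow_and_grass(w) for w in worlds}) == 2
```

Nothing about the sentences changes from world to world; only the argument to the functions does, which is the sense in which the 'world' is the variable.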
(It is worth adding that trans-world entities are often taken to be metaphysically suspect, but propositions do need to be genuinely trans-world

12 Though we must be careful in our estimate of how much of our intuitive idea of logical necessity we have captured here. From one perspective we can see ourselves as having simply determined the necessity of logical truths, but if we turn it around and ask what modal logic we might obtain in this way, then the answer is that it is astonishingly weak, being Lemmon's S0.5.
13 Of course there is another sense that can be attributed to the phrase representational semantics in which no one would think of a truth table as providing one, as Etchemendy also notes. That the sentence 'snow is white' states the truth that snow is white is due to extra-logical facts of linguistic construction, reference and predication, and none of that is unpacked by the truth table. The truth table simply assumes that the semantics for the language is given and fixed and then looks at how the truth value of the sentence with that meaning would fare in different possible worlds. It is thus as though we carried the sentence across into different possible worlds and watched the effect on the sentences of which it is a constituent.


for possible worlds even to be intelligible.) We can begin to see the problem by noting that, in the paper 'On the Concept of Logical Consequence,' Tarski gave a very general argument for the coincidence of the two kinds of semantics, or, more accurately, for the ability of an interpretational semantics to guarantee the necessity of logical truths in a representational semantics. We look at this argument and Etchemendy's response after we have described the extension of interpretational semantics to the predicate calculus.

3.3 Substitutional Semantics

The argument given in the last section to connect the representational (modal) semantics of propositional logic with a substitutional (schema) semantics clearly depends on the truth-table decision procedure for that logic, and, in particular, on the ability of the truth table to be interpreted in two different ways. But there is no reason to expect that argument to extend to other logics where there is no decision procedure, or where there is no possibility of ambiguity. This is why the suggestion that there is an argument that will connect these different semantics for all logics should make us particularly suspicious and vigilant. But to present Tarski's argument, and Etchemendy's response to it, it will be necessary first to give a brief account of the idea behind substitutional semantics, and thereby Tarski's motivation for developing his account of satisfaction and truth, as we now find them in standard model theory.

The idea of a substitutional semantics for a propositional logic is reasonably clear. An expression such as p ∨ ¬p is a function with a single variable p. Propositions are substituted into the variable position to yield a definite compound proposition with a definite truth value. It is analogous to taking an expression like x^2 + x + 4 = y and substituting particular numbers into the variable position x to yield a value for y. We can make the analogy closer by thinking of the propositional formula as p ∨ ¬p = y, where y can take only the values true or false. Tautologies are then the constant functions with the value true for all substitutions; contradictions are constant functions with the value false for all substitutions—just as f(x) = x − (x + 2) is a constant numerical function. 14 Substitution works by having a set of substituends which constitutes the domain of the functions, and where the range is some set of values.
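The function analogy can be made concrete in a few lines (my own illustration; `is_constant` is a hypothetical helper, not part of the original text):

```python
# Tautologies and contradictions as constant functions: substitute every
# value a substituted proposition could contribute and watch the output.
def is_constant(f, domain, value):
    return all(f(x) == value for x in domain)

bools = [True, False]
assert is_constant(lambda p: p or not p, bools, True)    # tautology
assert is_constant(lambda p: p and not p, bools, False)  # contradiction

# The numerical analogue from the text: f(x) = x - (x + 2) is constantly -2.
assert is_constant(lambda x: x - (x + 2), range(-100, 100), -2)
```

A non-constant function, by contrast, corresponds to a contingent form, one whose instances may be true or false depending on what is substituted.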
The idea of extending substitution to propositional expressions is often attributed to Bolzano, but it is at least implicit in all earlier discussions of logic going back

14 Propositional logic is unusual in that we are simultaneously considering a set of functions which have a different number of variables.


as far as Aristotle. Generalizing this account to formulæ with different sets of terms taken as variable is now reasonably intuitive. We decide which sets of terms are going to count as fixed and which are going to count as variable, and we assign sets of substituends, from the stock that the language has to offer, to the variables. Thus we must think of the language as broken down into lists of grammatically similar terms. On the one hand we have the list of fixed terms F and on the other, sets of lists of substituends for the variable terms V_i. Thus under normal, and familiar, circumstances the list F might consist of the following terms:

    and, or, not, if…then, if and only if, some, all.

The entries for the variable types V consist of a many-columned set of lists V1, V2, V3, etc.

    names        predicates    functions
    Mary         heavy         plus
    George       white         square
    London       round         multiplication
    New York     King          …
    Sydney       …             …
    …
A logical truth with respect to some particular choice of fixed terms is a true sentence which remains true whenever any term in the lists V1, V2, V3, … is substituted into the appropriate place in the sentence. The logical parsing may be coarse—so that all possible indicative sentences are entered into a single list of variable terms (and with F as above), as in propositional logic—or it may be very fine, so that all terms are treated as variable and none are fixed. Under the coarsest possible parsing all terms are fixed and none are variable. This gives us a partial ordering (indeed a lattice) of possible logics, with a different set of 'logical truths' for each choice of fixed and variable terms. The minimal element of this partial order has no terms fixed, while the maximal element has every term fixed. Thus for the maximal element every true sentence is a 'logical truth' since there are no allowable substitutions: it is therefore vacuously true that every true sentence remains true under every substitution into the variable places.

Again, we might consider the analogy with algebraic expressions. The function f(x) = x^2 + 4x + 2 has one variable and three functions that are fixed in meaning: exponentiation, multiplication and addition. These play the role of 'logical' constants. In addition there are the non-logical constants '1', '4' and '2'. If we are looking for a more general form for this function we might consider it to be nx^2 + ux + v. We obtain the original function by filling in particular values for the new variables 'n', 'u', and 'v'. Yet this function form is only an instance of a more general function form a_u x^n + b_v x^m + c_w—and even this can be generalized further if we do not treat exponentiation, multiplication and addition as fixed. Thus, depending on what we treat as semantically fixed and what we treat

as constant, we get a lattice of different expressions that vary in the number and character of their variable positions. This partial ordering of 'logics'—the scare quotes indicating that many will be so bizarre as to not really deserve the name—will exist for every given language, that is, for every given vocabulary of terms. We can thus think of there as being two parameters that must be fixed to determine a logic: firstly, the fixed and variable terms must be specified, and secondly the language must be broken into sets of terms and apportioned to the variables. When we have done this we arrive at the basis for a 'logic': it may be a predicate logic, a tense logic, a modal logic, or something even more exotic. 15

We have two things here that we must be careful to distinguish: a recipe for constructing generalised logical languages, and a statement about the way to consider and treat variables that is modelled on the way we treat them in general algebraic situations. Yet both conflict to some extent with our pre-theoretic intuitions. In the former case the conflict arises because we don't think that we should be completely free to choose what is regarded as a logical constant: surely there should be some general constraints that limit what can count as a logic? If we choose every term to be fixed then, as already noted, all true sentences will be 'logical' truths and every true material conditional will correspond to a 'valid' argument. Yet surely this is not really a logic? A genuine logic requires some connection to exist between the premises and conclusion of a valid argument—if the premises are true then the conclusion must be true. How has this necessity been captured in this limiting case of a

15 I do not claim that the above is sufficient to make a logic, however. If one can find no plausible rule(s) of inference then perhaps what we have are simply idle schemata. This element is missing from Etchemendy's account.


logic? As we will see, this is the question that is at the heart of the dispute over Tarski's account of logical consequence. But the second problem is that we think that logic should not be dependent on the resources of the language that we use: if we have a set of logical truths then, even if the language had been less expressive, the logical truths should have been the same. After all, we think of the logical truths as being a species of necessary truths, and we can hardly think that these vary according to the linguistic resources of one group of language users. Surely the necessary truths are not going to be conditional on such contingent facts as how rich a vocabulary a language has? Yet it is precisely this dependence that substitutional semantics forces upon us.

To illustrate with a very artificial example: suppose that we have the non-logical sentence 'New York is a city' and we make both proper names and predicates into variable terms, so that in this instance the form for the sentence is Fa and both F and a are variables. Consider the two toy languages below with their lists of variable terms laid out in two columns.

L1:

    names        predicates
    New York     is larger than a mile square
    London       is heavily polluted
    Sydney       is a city
    Jack         is less than six feet tall
    …            …

L2:

    names        predicates
    New York     is larger than a mile square
    Sydney       is a city
    …            …

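How the substitutional test runs over these two languages can be sketched as follows (the set of true sentences is a stand-in oracle I supply for illustration; in the text the truth-values are simply given):

```python
# Hypothetical truth oracle for a handful of name/predicate sentences
# (an assumption made for the sketch).
TRUE_SENTENCES = {
    ("New York", "is a city"), ("New York", "is larger than a mile square"),
    ("New York", "is heavily polluted"),
    ("London", "is a city"), ("London", "is larger than a mile square"),
    ("London", "is heavily polluted"),
    ("Sydney", "is a city"), ("Sydney", "is larger than a mile square"),
    ("Jack", "is less than six feet tall"),
}

def logical_truth(names, predicates):
    """With no fixed terms, a sentence of form Fa counts as a 'logical truth'
    iff every substitution from the language's lists yields a truth."""
    return all((n, p) in TRUE_SENTENCES for n in names for p in predicates)

L1 = (["New York", "London", "Sydney", "Jack"],
      ["is larger than a mile square", "is heavily polluted",
       "is a city", "is less than six feet tall"])
L2 = (["New York", "Sydney"],
      ["is larger than a mile square", "is a city"])

assert not logical_truth(*L1)  # 'Jack is heavily polluted' etc. come out false
assert logical_truth(*L2)      # every L2 substitution instance is true
```

The same test applied to the same sentence form thus gives different verdicts depending only on how rich the language's substitution lists happen to be.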
Under L1 the sentence 'New York is a city' is not a logical truth, since the admissible substitutions yield 'New York is less than six feet tall' or even 'Jack is heavily polluted', and both of these are false. But note that with the same choice of fixed terms—namely, the empty set—but a more restrictive language, 'New York is a city' does turn out to be a 'logical' truth. For if L2 gives the available substitutions then the sentence remains true no matter what we substitute for the name and predicate of the sentence. So with the same set of fixed terms the sentence 'New York is a city' is a 'logical truth' for one language but not for the other.

It is worth seeing why these problems didn't arise when we were considering propositional logics and truth tables. The first reason is that obviously the logical constants are fixed as the truth-functional connectives. The second reason is that there is a tendency to consider the substituends as abstract entities—propositions—rather than sentence tokens. In fact even when this is


not explicitly done it tends to be smuggled in implicitly in the form of possible sentence tokens. This is why no one worries about truth-functional compounds that are so complex that the corresponding truth tables would require more sentence tokens than there are or have ever been. There are always possible sentence tokens standing by in the wings.

When we move to the model theory of first-order quantificational languages we encounter the same problem: how to define the substituends so that any limitations in our lexicon don't distort our account of logical truth and valid consequence? Frege's own development of quantificational logic by-passed these problems in the same way that we saw in the propositional case—the way that is implicit in the analogy with algebraic expressions: consider the substituends as abstract entities. Indeed we might consider that the need for such objects as propositions and concepts is due to two separate demands: 1) the need for entities that are lexically individuated, as sentences and their components are, and 2) the need for entities not subject to the contingencies of definition and finiteness in the way that sentences are. Only if there were something that met those two conflicting demands could logic be developed formally, on the analogy with mathematics. Tarski's account of satisfaction is intended to meet this need in a more decisive way. We will see how it was intended to work in the next section.

3.4 First Order Models

Model Theory begins with a particular, restricted, language—a first-order language—and proceeds to find interpretations for that language in particular

structures. We begin, therefore, by discussing what makes up a first-order language. (I go through these, seemingly, elementary points in order to make a few remarks that are not often made. It can be skipped without loss to the main argument.)

We start with symbols that form the alphabet of the language. These come in two kinds: logical symbols and non-logical symbols. The former consist of i) variables: x, y, x1, x2, …, y1, y2, …; ii) quantifiers and connectives; iii) the equality symbol '=' (optional); iv) punctuation symbols: brackets. The non-logical symbols of the language consist of i) predicates: F, G, F1, F2, …, G1, G2, …; ii) constants: a, b, …; iii) function symbols: f, g, f1, f2, …, g1, g2, … (with subscripts, as becomes necessary). In addition, the predicate and function symbols have an n-arity, which can be denoted with a superscript and which indicates the number of term positions required by the predicate or function. So, for example, the successor relation is a 2-arity, or binary, predicate, while 'human' is a 1-arity, or monadic, predicate. Addition and multiplication are 2-arity functions.

Once we have the alphabet we can form expressions. The progression here is from terms to atomic formulas to (general) formulas. Terms are either constants, or variables, or functions of n-arity followed by n constants or variables. The terms play the role of nouns in the first-order language. An atomic formula is of the form i) t = s, where t and s are both terms (if '=' is a logical constant); ii) an n-arity predicate followed by n terms. A general formula is either an atomic formula, or a negated formula, or of the form a b, where