Graham Priest
Validity
1 Introduction: Approaching the Problem
1.1 The Nature of Logic
Knowledge may well, in the last analysis, be a seamless web. Yet it certainly falls into relatively well-defined chunks: biology, history, mathematics, for example. Each of these fields has a nature of a certain kind; and to ask what that nature is, is a philosophical question. That question may well be informed by developments within the field, and conversely, may inform developments in that field; but however well that field is developed, the question remains an important one, and one that will pay revisiting. It is such a revisiting that I will undertake here.
The field in question is logic, one of the oldest areas of knowledge. The nature of this has been a live issue since the inception of the subject, and numerous, very different, answers have been given to the question 'what is logic?'. To review the major answers that have been given to this question would be an important undertaking; but it is one that is too lengthy to be attempted here. What I do intend to do is to give the answer that I take to be correct. Even here, it is impossible to go into all details. Indeed, to do so one would have to solve virtually every problem in logic! What I will give is the basis of an answer. As we will see, there is enough here to keep us more than busy.
1.2 Focusing on Validity
What, then, is logic? Uncontroversially, logic is the study of reasoning. Not all the things that might fall under that rubric are logic, however. For example, the way that people actually reason may, in some profound sense, be part of the ultimate answer to the question of the nature of logic (think of Wittgenstein in the Investigations), but logic is not about the way that people actually think. The reason for this is simple: as a rich literature (if not common sense) now attests, people frequently reason illogically. Logic does not tell us how people do reason, but how they ought to reason. We will return to the question of the 'ought' here later. For the present, let us cede the question of how people actually reason to psychology.
The study of reasoning, in the sense in which logic is interested, concerns the issue of what follows from what. Less cryptically, some things (call them premises) provide reasons for others (call them conclusions). Thus, people may provide others with certain premises when they wish to persuade them of certain conclusions; or they may draw certain conclusions from premises that they themselves already believe. The relationship between premise and conclusion in each case is, colloquially, an argument, implication or inference. Logic is the investigation of that relationship. A good inference may be called a valid one. Hence, logic is, in a nutshell, the study of validity.
The central question of logic is, then: what inferences are valid, and why? Neither the answer to this question, nor even how to go about answering it, is at all obvious. Logic is a theoretical subject, in the sense that to answer this question one has to construct a theory, to be tested by the usual canons of theoretical adequacy. And what other notions the theory may take into its sweep (truth, meaning, or what not) is part of the very problem.
1.3 Validity: a First Pass
How, then, is this central question to be answered? Doubtlessly, a valid inference is one where the premises provide some genuine ground for the conclusion. But what does that mean? Traditionally, logic has distinguished between two notions of validity: deductive and non-deductive (inductive). A valid deductive argument is one where, in some sense, the conclusion cannot but be true, given the premises; a valid inductive argument is one where there is some lesser degree of support. Standard examples illustrate the distinction clearly enough. One might well ask why the notion of validity falls apart in this way, and what the relationship is between the two parts. I will come back to the whole issue of inductive inference later in the essay, and give a uniform account of validity, both deductive and inductive. For the present, let us simply accept the distinction between the two notions of validity as a given, and focus on deductive validity.
2 Deductive Validity
2.1 Proof-Theoretic Characterisation
What, then, is a deductively valid inference? Modern logic standardly gives two, very different, sorts of answer to this question: proof-theoretic and model-theoretic (semantic). In the proof-theoretic answer, one specifies some basic rules of inference syntactically. A valid inference is then one that can be obtained by chaining together, in some syntactically characterisable fashion, any of the basic rules. The whole process might take the form of a Gentzen system, a system of natural deduction, or even (God help us) an axiom system.
Such a characterisation may undoubtedly be very useful. But as an answer to the main question it is of limited use, for several reasons. The first is that there seem to be languages for which the notion of deductive validity is provably uncharacterisable in this way. Second-order logic is the obvious example. This has no complete proof-theoretic characterisation. Given certain assumptions, the same is true of intuitionistic logic too.
Possibly in these cases (especially the intuitionistic one) one might simply reject the semantic notion of validity with respect to which the proof-theory is incomplete. But even if one does this, there is a more profound reason why a proof-theoretic characterisation is unsatisfactory as an ultimate characterisation of validity. We can clearly give any number of systems of rules. Some may have nothing to do with logic at all; and those that do may give different answers to the question of which inferences are valid. The crucial question is: which system of rules is the right one? The natural answer at this point is to say that the rules are those which hold in virtue of the meanings of certain notions that occur in the premises and conclusions. This appears to take us into the second characterisation of validity, the semantic one. Some independent account of those meanings is given, and the appropriate proof-theory must answer to the semantics by way of a suitable soundness proof (and perhaps, also, completeness proof). And indeed, I think this way of proceeding is correct.
One may, however, resist this move for a while. One may suggest that it is a mistake to understand meaning in some independent way, but that the rules themselves specify the meanings of certain crucial notions involved. For example, one might say that the introduction and elimination rules for a connective in a system of natural deduction specify its meaning. The problem with this was pointed out by Prior (1960). One cannot claim that an arbitrary set of rules specifies the meaning of a connective. Suppose, for example, that one could characterise a connective, ∗ ('tonk'), by the rules: α ⊢ α ∗ β and α ∗ β ⊢ β. Then everything would follow from everything: hardly a satisfactory outcome.
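Prior's point can be made vivid in a few lines of code (a toy illustration of my own, not part of the essay's argument): once tonk's two rules are available, a two-step derivation takes any premise to any conclusion whatsoever.

```python
# Sentences are strings; a tonk-sentence a * b is encoded as ('tonk', a, b).
# tonk-I: from a, infer a * b (for any b).  tonk-E: from a * b, infer b.

def derivable(premise, goal):
    """Is `goal` derivable from `premise` using only the tonk rules?"""
    step1 = ('tonk', premise, goal)   # by tonk-introduction from `premise`
    step2 = step1[2]                  # by tonk-elimination
    return step2 == goal              # always succeeds: the rules trivialise

print(derivable("snow is white", "snow is green"))  # True
```

The answer is True for every pair of sentences, which is exactly the disaster Prior diagnosed: an arbitrary pair of rules need not confer a coherent meaning.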
Some constraints must therefore be put on what rules are acceptable. One might attempt some purely syntactic constraint. For example, it has been suggested that the rules in question must give a conservative extension. This, however, will not solve the problem. Conservativeness is always relative to some underlying proof theory. For example, adding the rules for Boolean negation to a complete proof theory for positive classical logic is conservative; adding them to one for positive intuitionist logic is not. One needs, therefore, at the very least, to justify the underlying proof theory. And this cannot be done by conservativeness, at least indefinitely, on pain of infinite regress.
One way in which it might be thought possible to avoid this regress is explained by Sundholm 1986, p. 485ff. Suppose that we are working in a natural deduction system, and suppose that we take the introduction rule for a connective to provide a direct account of its meaning. This needs no justification: any introduction rule may serve to do this. The corresponding elimination rule is then justified by the fact that it is conservative with respect to the introduction rule. In the words of Dummett, one of the people to whom this idea is due, the introduction and elimination rules are in harmony. The idea can be cashed out in terms of a suitable normal-form theorem: whenever we have an introduction rule followed by the corresponding elimination rule, both can be eliminated.
The regress is not eliminated, however. For the introduction and elimination rules are superimposed on structural inferential rules; for example, the transitivity of deducibility (deductions may be chained together to make longer deductions). Such structural rules are not inevitable, and the question therefore arises as to how these rules are to be justified. This becomes patently obvious if the proof-theory is formulated as a Gentzen system, where the structural rules are quite explicit, and for which there is now a well-advanced study of logics with different structural rules: sub-structural logics. One needs to justify which structural rules one accepts (and which one does not), and there is no evident purely proof-theoretic way of doing this.
If, as the foregoing discussion suggests, one cannot justify every feature of a proof-theory syntactically, the only other possibility would seem to be some semantic constraint to which the rules must answer. We are thrown back to the other kind of characterisation of validity, the model-theoretic one. So let us turn to this.
2.2 Model-Theoretic Characterisation
A deductively valid inference is, we said, one where the premises cannot be true without the conclusion also being true. A crucial question here is how to understand the 'cannot'. What notion of impossibility (and, correlatively, of necessity) is being appealed to here?
Modern logic has produced a very particular but very general way of understanding this. When we reason, we reason about many different situations: some are actual (what things are like at the centre of the sun); some are merely possible (what things would have been like had the Axis won the second world war); and maybe even some are impossible (what things would be like if, per impossibile, someone squared the circle). We also have a notion of what it is to hold in, or be true in, a situation. In talking of validity, necessity is to be explicated in terms of holding in all situations. Let us use lower case Greek letters for premises and conclusions, upper case Greeks for sets thereof, and write ⊨ to indicate valid inference. Then we may define Σ ⊨ α as:
for every situation in which all the members of Σ hold, α holds
There is also a corresponding notion of logical truth: α is logically true iff it holds in all situations.
So far so good. But what is a situation, and what is it to be true in it? One could, I suppose, take it that these notions are indefinable, but this is not likely to get us very far; nor is it the characteristic way of modern logic. Using mathematical techniques, both notions are normally defined. A situation is taken to be a mathematical structure of a certain kind, and holding in it is defined as a relationship between truth-bearers and structures. Both structures and relation are normally defined set-theoretically.
Thus, for example, in the case of the standard account of validity for (classical) first order logic (without free variables), a structure is a pair:
A = ⟨D, I⟩   (Q)
where D is the non-empty domain of quantification, and I is the denotation function. Truth in A is defined in the usual recursive fashion. Or in a Kripke semantics for modal logics, a structure is a 4-tuple:
A = ⟨W, g, R, I⟩   (K)
where W is a set of worlds, g is a distinguished member of W (the base world), R is a binary relation on W satisfying certain properties (to be employed in stating the truth conditions of □), and I is the denotation function, which assigns each atomic formula a subset of W (namely, those worlds at which it is true). Truth at a world w ∈ W is again defined in a recursive fashion. And truth in A is defined as truth at g.
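The recursive clauses just described can be sketched in code (an illustrative encoding of my own; the tuple layout, formula representation, and the restriction to a few connectives are all assumptions, not the essay's):

```python
# A structure is (W, g, R, I): worlds, base world, accessibility relation,
# and a denotation function sending each atom to the set of worlds where
# it is true. Formulas: ('atom', p), ('not', f), ('and', f, h), ('box', f).

def holds(fmla, w, W, R, I):
    """Truth of `fmla` at world `w`, defined recursively."""
    op = fmla[0]
    if op == 'atom':
        return w in I[fmla[1]]
    if op == 'not':
        return not holds(fmla[1], w, W, R, I)
    if op == 'and':
        return holds(fmla[1], w, W, R, I) and holds(fmla[2], w, W, R, I)
    if op == 'box':  # true at w iff true at every world accessible from w
        return all(holds(fmla[1], v, W, R, I) for v in W if (w, v) in R)
    raise ValueError(op)

def true_in(fmla, structure):
    """Truth in A is truth at the base world g."""
    W, g, R, I = structure
    return holds(fmla, g, W, R, I)

# Hypothetical example: two worlds; p is true only at world 1, which
# world 0 (the base world) sees.
W, g, R = {0, 1}, 0, {(0, 1)}
I = {'p': {1}}
print(true_in(('box', ('atom', 'p')), (W, g, R, I)))  # True
print(true_in(('atom', 'p'), (W, g, R, I)))           # False
```

□p holds at the base world because p holds at every accessible world, even though p itself fails there; this is the recursive pattern the text describes.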
In any semantics of this kind, the recursive truth conditions for a connective or quantifier can be thought of as spelling out its meaning, thus providing something against which a proof-theoretic rule employing that notion may be judged.
Once the notions of a structure and holding-in are made precise, the definition of validity can be spelled out exactly. Let us write A ⊨ α to mean that α holds in structure A. We may also say that A is a model of α. A is a model of a set of truth bearers if it is a model of every member of the set. Then α is a logical truth iff every structure is a model of α. And:
Σ ⊨ α iff every model of Σ is a model of α   (DV)
I take this to be the best answer presently on offer to the question of when an inference is deductively valid. Note, though, that a structure is a set-theoretic entity, and ⊨ is a set-theoretic relation. Thus, (DV) is a statement of mathematics. Strictly speaking, then, we have not given a final account of what it is for an inference to be valid; we have reduced the matter to that of the truth of a certain mathematical sentence. We may well ask what it is for such a statement (or any mathematical statement) to be true. This is a profound question, but is far too hard to address here. One problem at a time!
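In the propositional case, where a structure is simply an assignment of truth values to atoms, (DV) can be checked by brute force. The following sketch (my own toy encoding, covering only negation and disjunction, not anything from the essay itself) enumerates every structure and tests whether each model of the premises models the conclusion:

```python
from itertools import product

def evaluate(fmla, A):
    """Truth of `fmla` under valuation A (a dict from atoms to booleans)."""
    op = fmla[0]
    if op == 'atom': return A[fmla[1]]
    if op == 'not':  return not evaluate(fmla[1], A)
    if op == 'or':   return evaluate(fmla[1], A) or evaluate(fmla[2], A)
    raise ValueError(op)

def atoms(fmla):
    if fmla[0] == 'atom':
        return {fmla[1]}
    return set().union(*(atoms(sub) for sub in fmla[1:]))

def valid(premises, conclusion):
    """(DV) in miniature: every model of the premises models the conclusion."""
    letters = sorted(atoms(conclusion).union(*(atoms(p) for p in premises)))
    for bits in product([True, False], repeat=len(letters)):
        A = dict(zip(letters, bits))
        if all(evaluate(p, A) for p in premises) and not evaluate(conclusion, A):
            return False   # a model of the premises that is not a model of the conclusion
    return True

p, q = ('atom', 'p'), ('atom', 'q')
print(valid([], ('or', p, ('not', p))))      # True: a logical truth
print(valid([('or', p, q), ('not', p)], q))  # True: disjunctive syllogism
print(valid([p], q))                         # False: a countermodel exists
```

Of course, in the full first-order or modal setting the space of structures is not finitely surveyable; the point of the sketch is only to exhibit the quantification over structures that (DV) involves.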
2.3 Filling in the Details
(DV) is, in fact, only the form of a definition of validity. It leaves many details to be filled in. These depend, for a start, on the language employed in formulating the premises and conclusions. More importantly, the details cannot be filled in without resolving philosophical issues of a very substantial kind. This is not the place to go into these, but let me just point out some aspects of the process.
A major question to be answered is: how, exactly, are situations structured? This question cannot be divorced from that of how to define the relation of being true in a structure. For example, should the truth conditions of the conditional employ an ordering relation on worlds, as occurs in Kripke semantics for intuitionist logic, a ternary relation, as occurs in many relevant logics, or none of these, as in classical logic? If either of the first two of these is correct, then the relation will have to be a part of the structure. Another example: assuming that the truth/falsity conditions for a predicate are to be given in terms of its extension (those things it is true of) and anti-extension (those things it is false of), are we to suppose that these are exclusive and exhaustive of the relevant domains (as classical logic assumes) or not (as more liberal logics may allow)? Issues of the above kind pose deep metaphysical/semantical issues of a highly contentious nature.
Many of the relevant considerations here are familiar from the literature debating intuitionist, paraconsistent, and other non-classical logics. Theoretical issues concerning meaning, truth, and many other notions, certainly enter the debate. There is also a question of adequacy to the data. We have intuitions about the validity of particular inferences. (We may well have intuitions about the validity of various forms of inference as well, though because of the universality implicit in these, they are much less reliable.) These act like the data in an empirical science: if the theory gives the wrong results about them, this is a black mark against it. But as with all theorisation, the fact that a theory has desirable theoretical properties (e.g., simplicity, non-ad-hocness) may well cast doubt on any data that goes against it, especially if we can explain how we come to be mistaken about the data. (We will see an example of this concerning enthymemes a little later.) The dialectical juggling of theory against data is always a matter of good judgment (which is not to say that all judgments are equally good), and always fallible.
The situation is, in fact, even more prone to dispute than I have so far said. This is because (DV) itself is couched in terms whose behaviour is theory-laden, such as the logical constant 'every'; and if one parses the restricted quantifier 'every A is B' in the usual way, as 'everything is such that if it is an A it is a B', then the conditional is getting in on the act too. The meanings of such notions, especially 'if', are philosophically contentious. An orthodox course is to take the conditional to be a material one. But if one takes the material conditional to be non-detachable, as do most relevant and paraconsistent logics, then this will hardly appeal. It should be a genuine and detachable conditional.
Another way of looking at the matter is this: it is not just the definition (DV) itself that is at issue, but what does and what does not follow from it. This depends on the behaviour of the logical constants; that is, the valid principles that such constants satisfy; which is part of what is at issue in an account of validity; which is what (DV) gives. The issue is therefore a circular one. This does not mean that it is impossible to come up with a solution to the whole set of matters. It just means that there is no privileged point of entry: we are going to have to proceed by boot-strapping. Certainly, one can do this with classical logic; one can equally well do it with intuitionist logic or a paraconsistent logic. In the end, we want a theory that, as a total package, comes out best under reflective equilibrium. There is no short way with this.
2.4 The Tarskian Account
Before we leave deductive validity and turn to inductive validity, it will be illuminating to compare the account of validity I have given with the celebrated account given by Tarski in his essay (1936). In a nutshell, this is as follows. Certain words of the language are designated logical constants. Given a sentence, its form is the result of replacing each non-logical-constant with a parameter (variable). An interpretation is a function that assigns each parameter a denotation of the appropriate type (objects for names, extensions for predicates, etc.). A relationship of satisfaction is defined between interpretations and forms, standardly by recursion. An inference is valid iff every interpretation that satisfies the form of each premise satisfies the form of the conclusion. (Correspondingly, a sentence is logically true if its form is satisfied by all interpretations.)
If we identify interpretations with what I have been calling structures, and write the satisfaction relationship as ⊨, then the form of the Tarskian definition of validity is exactly (DV). It is therefore tempting to think of the two accounts as the same. And indeed they may, in some cases, amount to the same thing; but as accounts, they are quite distinct. For a start, an interpretation, as employed in the definition, is normally only part of a structure, e.g., the I of (Q) or (K). In all but the simplest cases, structures carry more information than that, e.g., the domain, D, in (Q), and the binary relation, R, in (K). This will, in general, make an important difference as to what inferences are valid. For example, consider the sentence ∃x∃y x ≠ y. At least as standardly understood, this contains only logical constants. According to the Tarskian account it is therefore either logically true or logically false. In fact, it is the former, since it is true (simpliciter). But given standard model theory, it holds in some structures and not others; hence it is not logically true.
In (1990) Etchemendy provides an important critique of the Tarskian account. It will be further illuminating to see to what extent the account offered here is subject to the same problems. Etchemendy provides two sorts of counter-examples to the Tarskian account. These concern under-generation and over-generation.
According to the Tarskian account, any valid inference is, by definition, formally valid; that is, any inference of the same form is valid. This seems to render certain intuitively valid inferences invalid. For example, given the usual understanding of logical form:
This is red;
hence this is coloured
is invalid, since its form is:
x is P;
hence x is Q
which has invalid instances.
This argument is not conclusive. One may simply agree that the original inference is invalid, but explain the counter-intuitiveness of this by pointing out that the inference is an enthymeme. It is an instance of the valid form:
x is P;
everything that is P is Q;
hence x is Q
and the instance of the suppressed premise here is 'everything that is red is coloured', which is obviously true.
I shall not discuss the adequacy of this move here. I wish only to use the example to contrast the account of validity given here with the Tarskian one. For, modulo an appropriate account of logical form, the present account may, but need not, make validity a formal matter. This just depends on what structures there are. In the case in point, for example, there may be no structures where there is something in the extension of 'red' that is not in the extension of 'coloured'. We might, for example, eliminate such structures from a more general class with meaning postulates, in the fashion of Carnap and Montague. If this is the case, the inference in question is valid, though not formally so. Of course, this path is not pursued in standard model theory, where the more general notion of structure is employed; the result of this is that the notion of validity produced is a formal one. If one takes this line, then the model-theoretic account of validity also has to employ the enthymematic strategy.
Etchemendy's second sort of counter-example to the Tarskian approach concerns over-generation. In such examples, Tarski's account counts as valid inferences that are not so; or if perchance this does not happen, this is so only by luck. Consider the sentence 'there are at most two cats': ∀x∀y∀z((Cx ∧ Cy ∧ Cz) → (x = y ∨ y = z ∨ z = x)). Provided that there are at least three objects in toto, this is not a logical truth; but if not, it is. In this case, presumably, the account gets the answer right. But if the universe is finite, with, say, 10^10^10 objects, the account is going to give the wrong answer for 'there are at most 10^10^10 cats'. Moreover, whether or not something is a logical truth ought not to depend on such accidental things as the size of the universe.
One may meet Etchemendy's criticism by pointing out that the totality of all objects is not restricted to the totality of all physical (actual) objects, but comprises all objects, including all mathematical objects, all possible, and maybe all impossible, objects too. Not only is this so large as to make every statement of the form 'there are at most i cats' (where i is any size, finite or infinite) not a logical truth; but this result does not arise because of some lucky contingency. There is nothing contingent about the totality of all objects.
An objection similar to Etchemendy's might be made against the account of validity offered here. An inference is valid if it is truth preserving in all situations. Couldn't this give the wrong answer if there are not enough situations to go around; and even if there are, should such a contingency determine validity? The answer to this is the same, and even more evident. The totality of situations is the totality of all situations: actual, possible, and maybe impossible too. It doesn't make sense to suppose that there might not be enough. Nor is the result contingent in any way. The totality of all situations is no contingent totality.
But the point may be pressed. The official definition is given, not in terms of situations, but in terms of the mathematical structures that represent them. Might there not be enough structures to go around? One might doubt this. Structures need not be pure sets. Any situation is made up of components; and it is natural to suppose that these can be employed to construct an isomorphic set-theoretic structure. Yet the worry is a real one. One of the situations we reason about is the situation concerning sets. Yet in Zermelo-Fraenkel set theory, there is no structure with the totality of sets as domain. Hence this set theory cannot represent that situation. This does not show that (DV) is wrong, however. It merely shows that ZF is an inadequate vehicle for representation. A set theory with at least a universal set is required. There are many other reasons for being dissatisfied with ZF as a most general account of sets. See Priest (1987, ch. 2).
3 Inductive Validity
3.1 Form vs. Content
We have seen that the model-theoretic account of validity that I have offered is different from the Tarskian account. But, so far, we have not seen any definitive reason to prefer it to that account. The model-theoretic account is more powerful and more flexible, for sure, but as long as the enthymematic move is acceptable, we have seen no cases where this extra power and flexibility must be used. However, an argument for this may be found by looking at the question of inductive validity. In due course, I will also give a model-theoretic definition of inductive validity. But let us approach these issues via a different question.
Theories of deductive validity took off with Aristotle, and are now highly articulated. Theorisation about inductive validity is, by comparison, completely under-developed. Why? A standard answer, with a certain plausibility, is as follows. Deductive validity is a purely formal matter. Hence it is relatively easy to apply syntactic methods to the issue. By contrast, if an inference is inductively valid, this is not due to its form, but to the contents of the claims involved, a matter which is susceptible to no such simple method.
The issue is, however, not that straightforward. For a start, as we have seen, it may not be the case that deductive validity is a matter of form. That just depends on how other details of the account pan out. Moreover, it is not immediately clear that inductive validity is not a matter of form either. The inference:
x is P;
most Ps are Qs;
hence x is Q
is a pretty good candidate for a valid inductive form. (It is certainly not deductively valid.) It is true that an inference such as:
Abdul lives in Kuwait;
hence Abdul is a Moslem
is frequently cited as a valid inductive inference, and one that is not formally valid on the usual understanding of what the logical constants are. But it is not at all clear that this is so. Just as in the deductive case, we may take it to be an enthymeme of the above form with suppressed premise: most people who live in Kuwait are Moslems.
Despite this, inductive validity is not, in fact, always a formal matter. This, I take it, is one of the lessons of Goodman's new riddle of induction. Consider an inference of the following form (plausibly, one for enumerative induction):
Ea1 ∧ Ga1, ..., Ean ∧ Gan / ∀x(Ex → Gx)
If E is 'is an emerald' and G is 'is green', this inference seems quite valid. If, on the other hand, G is 'is grue' (that is, a predicate which, before some (future) time, t, is truly applicable to green things, and truly applicable to blue things thereafter), it is not. It is well known that there is no syntactic way of distinguishing between the two inferences. For though 'grue' is, intuitively, a defined predicate, 'green' can be defined in terms of 'grue' and 'bleen' (a predicate which, before t, is truly applicable to blue things and truly applicable to green things thereafter). Hence any syntactic construction may be dualised. What breaks the symmetry is that 'emerald' and 'green' are natural-kind terms (are projectible, in Goodman's terminology), whereas 'grue' is not. But there is no syntactic characterisation of this. Hence, inductive validity is not, in general, a formal matter. It follows, then, that no version of the Tarskian account of validity is going to be applicable to inductive validity. For as we have seen, Tarskian validity is formal validity.
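The dualisation point can be made concrete in a toy sketch (the objects, the switch-over time, and all the predicate definitions are hypothetical, invented only for illustration): defining 'green' and 'blue' from 'grue' and 'bleen' is, syntactically, exactly as simple as the reverse.

```python
T = 2050  # arbitrary switch-over time

def green(x): return x == 'emerald'    # toy stand-ins for the colour predicates
def blue(x):  return x == 'sapphire'

# grue/bleen defined from green/blue:
def grue(x, year):  return green(x) if year < T else blue(x)
def bleen(x, year): return blue(x)  if year < T else green(x)

# Dualise: treat grue/bleen as primitive and define green/blue from them.
def green_dual(x, year): return grue(x, year) if year < T else bleen(x, year)
def blue_dual(x, year):  return bleen(x, year) if year < T else grue(x, year)

# The dual definitions agree with the originals at every object and time:
objects, years = ['emerald', 'sapphire'], [2000, 3000]
print(all(green_dual(x, y) == green(x) for x in objects for y in years))  # True
print(all(blue_dual(x, y) == blue(x) for x in objects for y in years))    # True
```

The symmetry is perfect at the level of syntax, which is why, as the text says, only a non-syntactic notion such as projectibility can break it.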
3.2 Probability
How, then, are we to get a grip on the notion of inductive validity? A natural suggestion is that we should appeal to a suitable notion of probability. (Probability assignments are not, except in a very few cases, a formal matter.) Now there certainly seem to be intimate links between inductive validity and probability, but it is not clear that one can use the notion to formulate a satisfactory theory of inductive validity.
Let us restrict ourselves, for simplicity, to the one-premise case, and let us write the conditional probability of a given b in the usual way, as p(a/b). Then a first suggestion for defining inductive validity is as follows: an inference from premise b to conclusion a is valid if b raises the probability of a, i.e., if p(a/b) > p(a). This will not do, however. Just consider a case where b raises the probability of a though this is still small, as in:
John used to be a boxer;
so John has had a broken nose
This seems quite invalid. (Most boxers have never had broken noses.)
A more plausible suggestion is that the inference is valid if p(a/b) is sufficiently highwhere this is to be cashed out in some suitable fashion. But this, too, has highly counter-intuitive results. Consider the case, for example, where b decreases the probability of a, even though the conditional probability is still high, as in:
John used to be a boxer;
so John has not had his nose broken
This inference is of dubious validity.
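With some entirely hypothetical numbers (say 5% of people, but 40% of ex-boxers, have had a broken nose), both proposed definitions can be seen to misfire on the boxer examples:

```python
# b = "John used to be a boxer", a = "John has had a broken nose".
p_a = 0.05           # p(a): hypothetical base rate
p_a_given_b = 0.40   # p(a/b): raised by b, but still below 1/2

# First proposal: valid iff p(a/b) > p(a). The inference to a passes...
print(p_a_given_b > p_a)   # True, yet a is still probably false given b

# Second proposal: valid iff p(a/b) is sufficiently high. Take not-a:
p_not_a = 1 - p_a                   # 0.95
p_not_a_given_b = 1 - p_a_given_b   # 0.6
print(p_not_a_given_b >= 0.5)       # True: "high" on, say, a 0.5 threshold...
print(p_not_a_given_b < p_not_a)    # True: ...even though b lowered it
```

So probability-raising licenses a conclusion that is probably false, and the high-threshold proposal licenses a conclusion whose probability the premise actually decreased.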
Could an appeal to Bayesianism solve the problem at this point? According to Bayesians, the relevance of a premise, b, is simply that if we learn (or were to learn) that b, we (would) revise our evaluation of the probability of any statement, a, to p(a/b), where p is our current probability function. I do not wish to deny that we sometimes revise in this way, but this cannot provide a satisfactory account of inductive inference, for two reasons. First, if there is nothing more to the story than this, it is tantamount to giving up on the notion of inductive inference altogether. Inference is concerned with the question of when, given certain premises, it is reasonable to accept certain conclusions. In other words, we want to be able to detach the conclusion of the argument, given the premises. Conditionalisation does not, on its own, give an answer to the question of when this is possible. More conclusively, even Bayesians concede that there is information that cannot be conditionalised upon. For example, if a is anything such that p(a) = 1, and if the probability function that will result from our next revision is q, then we cannot conditionalise coherently on, e.g., q(a) ≤ 0.5. For the result of conditionalisation is q(a) = 1 (still). Yet information of this form can certainly be the premise of an inductive argument; for example, one whose conclusion is that q(a) < 0.5. Hence inference outstrips conditionalisation.
Even if there were some way of analysing inductive validity satisfactorily in terms of probability, a more fundamental problem awaits us. Deductive validity and inductive validity would seem to have something to do with each other. They are both species of validity. If deductive validity is to be analysed in terms of preservation of truth in a structure, and inductive validity is to be analysed as something to do with probability, they would seem to be as different as chalk from cheese.
Maybe there is some deeper connection here, but it is not at all obvious what this could be. One might try defining a deductively valid inference, b/a, as one where p(a/b) = 1. This has nothing to do with model theory, but one might hope that, by making suitable connections (for example, by considering a probability measure on the space of models), the definition could be shown to be equivalent to (DV). One might even jettison the model-theoretic definition of deductive validity entirely, and attempt a uniform account of validity in terms of probability. Such moves face further problems, however. For example, consider:
John chose a natural number at random;
hence John did not choose 173
On the usual understanding of probability theory, this satisfies the probabilistic account of deductive validity, but it is hardly deductively valid: the conclusion might turn out to be false. There are ways that one might try to get around this problem too, but it would certainly seem much more satisfactory if an account of inductive validity could be found which made the connection with deductive validity obvious, and which did not depend upon probabilistic jiggery-pokery.
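To see why the inference satisfies the probabilistic account, note that there is no countably additive uniform distribution over all the natural numbers, so the usual move is to consider uniform choices from {1, ..., N} and let N grow. The sketch below (a finite surrogate, an assumption of the illustration rather than part of the original example) makes the arithmetic explicit: the probability of the conclusion is 1 − 1/N, which approaches 1 as N grows, even though the conclusion may be false on any particular occasion.

```python
from fractions import Fraction

# "John chose a number at random from {1, ..., N}": the probability that
# he did not choose 173 is 1 - 1/N. It tends to 1 as N grows without
# bound, yet the conclusion can still be false in any particular case.
def p_not_173(N):
    return 1 - Fraction(1, N)

assert p_not_173(10**6) == Fraction(999999, 1000000)
assert all(p_not_173(N) < 1 for N in (10**3, 10**6, 10**9))
```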
3.3 Non-Monotonic Logic
Such an account is now available, thanks to recent developments in non-monotonic logic. An inference relation, ⊨, is monotonic if P ⊨ k entails P ∪ S ⊨ k. Deductively valid inferences are monotonic. For suppose that P ⊨ k. If all the premises in P ∪ S hold in a structure, then certainly all those in P do. In which case, so does k. On the other hand, inferences traditionally accounted inductively valid are well known not to be monotonic. Consider only:
Abdul lives in Kuwait;
Abdul went to mass last Sunday;
hence Abdul is a Muslim
The study of non-monotonic inferences, quite independently of probability theory, is one that has seen rapid and exciting developments in the last 15 years, mainly from amongst logicians in computer science departments. There are many distinctive approaches to non-monotonic inference, of various degrees of mathematical sophistication. But it is now becoming clear that at the core of theories of non-monotonicity there is a canonical construction.
Let me illustrate. Consider Abdul again. What makes it plausible to infer that he is a Muslim, given only that he lives in Kuwait, is that, if he were not, he would be rather abnormal. (Qua inhabitant of Kuwait, and not, of course, in any evaluative sense.) Although the inference may not be truth-preserving, it is certainly truth-preserving in all normal situations.
We can formulate this more precisely as follows. Let us suppose that we can compare situations (or the structures that represent them) with respect to their normality. Normality, of course, comes by degrees. Let us write the comparison as A > B (A is more normal than B). > is certainly a partial order (transitive and antisymmetric); but it is not a linear order. There is no guarantee that we can compare any two structures with respect to their normality. Let us say that A is a most normal model of a set of sentences, P, A ⊨ₙ P, iff:
A ⊨ P and for all B such that B > A, not B ⊨ P
Then if we write inductive validity as ⊨ᵢ, we can define:
P ⊨ᵢ k iff every most normal model of P is a model of k (IV)
This captures the idea that an inference is inductively valid if the conclusion holds in all models of P that are as normal as P will let them be. Notice how this gives rise to non-monotonicity in a very natural way. k may well be true in all situations most normal with respect to P, but throw in S, and the most normal situations may be more abnormal, and not necessarily ones where k holds.
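The construction can be made concrete in a small computational sketch. Everything specific in it is an assumption of the illustration, not part of the definition: the three atoms, the finite space of worlds, and in particular the normality ordering, which is generated here by counting violated defaults ("Kuwaitis are normally Muslims", and, weighted more heavily, "mass-goers are normally not Muslims").

```python
from itertools import product

# Toy worlds: truth-value assignments to three atoms about Abdul.
atoms = ('kuwait', 'mass', 'muslim')
worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=3)]

def models_of(premises):
    """All worlds in which every premise holds."""
    return [w for w in worlds if all(prem(w) for prem in premises)]

# A purely illustrative normality ordering: lower abnormality = more normal.
def abnormality(w):
    return 2 * (w['mass'] and w['muslim']) + (w['kuwait'] and not w['muslim'])

def more_normal(a, b):                     # a > b in the text's notation
    return abnormality(a) < abnormality(b)

def most_normal_models(premises):
    ms = models_of(premises)
    return [a for a in ms if not any(more_normal(b, a) for b in ms)]

def inductively_valid(premises, conclusion):
    """Definition (IV): the conclusion holds in every most normal model."""
    return all(conclusion(w) for w in most_normal_models(premises))

def deductively_valid(premises, conclusion):
    """(DV) as the limit case: every model counts as normal."""
    return all(conclusion(w) for w in models_of(premises))

kuwait = lambda w: w['kuwait']
mass = lambda w: w['mass']
muslim = lambda w: w['muslim']

assert inductively_valid([kuwait], muslim)             # inductively good
assert not inductively_valid([kuwait, mass], muslim)   # defeated: non-monotonic
assert not deductively_valid([kuwait], muslim)         # not deductively good
```

The last assertion also illustrates the limit case: dropping the normality filter, so that every model counts, gives back the deductive notion (DV).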
3.4 Consequences of this Account
The definition (IV) is the one I wish to offer. It is exactly the same as (DV), except that where (DV) has 'model', (IV) has 'most normal model'. The connection between the two notions is therefore patent. Where deductive validity requires truth preservation over all structures, inductive validity requires truth preservation over all normal structures (with respect to the premise set). Deductive validity is the same as inductive validity except that we don't care about normality. This is equivalent to taking > to be the minimal ordering that relates nothing to anything. We therefore have a generic account of validity. Validity is truth-preservation in all structures of a certain kind. Deductive validity is the limit case where we are talking about all structures, period. Otherwise put: validity is about truth preservation in all normal structures. Deductive validity is the limit case where every situation is to be considered normal.
This account of inductive validity extends the model-theoretic account of deductive validity. Hence, all the comments that I made about filling in its details carry over to it, also. In this respect they are the same. The major technical difference between the two notions is the fact that inductive validity makes use of the ordering >, whilst deductive validity does not. This, therefore, focuses the major conceptual difference between the two. Although the question of what structures there are is open to theoretical debate, whatever they are, they are invariant across all contexts in which we reason. The definition (IV) also gives us a core of universally inductively valid inferences, namely those that hold for all orderings. But when we employ the notion of inductive validity in practice, we do not argue relative to all orderings, but with respect to one particular ordering. The ordering is not a priori, but is fixed by external factors, such as the world and the context. For example, the fact that to be green is more normal (natural) than to be grue is determined by nature. Or what is to count as normal may depend on our interests. In the context of discussing a religious question, a resident of Kuwait who goes to mass is not normal. In the context of discussing biological appendages, such as noses, they are (normally) quite normal.
This is, I think, the heart of the difference that people have sensed between inductive and deductive validity. It is not really a matter of the difference between form and content; or if it is, what this comes down to is exactly the dependence of inductive validity on an a posteriori feature, the ordering >.
4 Conclusion: Validity and Truth-Preservation
4.1 Normativity
Let me conclude by tying up some loose ends; and let us start by returning to the question of the normativity of logic. Validly, I said, is how people ought to reason. Why? The answer is simple. We reason about all kinds of situations. We want to know what sorts of things hold in them, given that we know other things; or what sorts of things don't hold, given that we know other things that don't. If we reason validly then, by the definition of validity, we can be assured that reasoning forward preserves the first property, and that reasoning backwards preserves the second. Validly is how one ought to reason if one wants to achieve these goals. The obligation is, then, hypothetical rather than categorical.
In less enlightened days, I argued against a model-theoretic account of validity of the kind given here. One argument was to the effect that a model-theoretic account validates the inference ex contradictione quodlibet: a, ¬a ⊨ b, which is not a norm of reasoning. This argument is just fallacious. A model-theoretic account may validate this argument; but equally, it may not. It depends, crucially, on what situations there are. If there are non-trivial but inconsistent situations, as there are in many logics, this will not be the case.
A second argument was to the effect that a model-theoretic account cannot be right since, if it were, to know that an inference is valid we would have to know, in advance of using it, that it preserves truth in all situations; and hence it would be impossible for us to learn something new about a situation by applying a valid inference. This argument is equally, though less obviously, invalid. It confuses matters definitional with matters epistemological. We may well, and in fact do, have ways of knowing that a particular inference is valid, other than that it simply satisfies the definition. Particular cases are usually more certain than general truths. Compare: the Church/Turing Thesis provides, in effect, a definition of algorithmicity. But we can often know that a process is algorithmic without having to write a program for a Turing machine. If we could not, the Thesis would not be refutable, which it certainly is.
4.2 Information-Preservation
Further on the subject of truth-preservation: it is sometimes objected that a definition of validity on the basis of truth preservation is too weak. What we often need of a notion of validity is that it preserve not truth, but something else, like information. Consider an inference engine for a computational database, for example. We want one which extracts the information that is implicit in the data. Truth has nothing to do with it.
This argument is also far too swift. Valid inferences preserve not just truth, but truth in a structure. Given the right way of setting things up, the situation as described in the database may well be (represented by) an appropriate structure. Structures do not have to be large like classical possible worlds, but may be small, as in situation-semantics. (In particular, the set of things true in a situation may be both inconsistent and incomplete.) A valid inference is, then, by definition, just what we want to extract the juice from the information provided by the database. Provided we choose our situations carefully, a logic of truth preservation is also a logic of information preservation.
4.3 A Final Thought
This last claim is, of course, just a promissory note, and needs to be redeemed by a lot of hard work, specifying all the details that have only been hinted at above. The aim of this paper has not been to present all the details of an account of validity. That is the work of several lifetimes. The aim has just been to provide the general form of an answer to the question of what validity is. To this extent, the slack in the definition is to its advantage. The details are to be filled in, in the most appropriate way. But despite the slack, the form of the answer is far from vacuous. There are certainly other possibilities (some of which we have briefly traversed); and the fact that this answer is possible at all, is a tribute to the developments in modern model theory, of both classical and non-classical logics, of which this account can be thought of as the distilled essence.
Notes
References
Belnap N., 1962, Tonk, Plonk and Plink, Analysis 22, 130-4; reprinted in Strawson (ed.) 1967.
Carnap R., 1952, Meaning Postulates, Philosophical Studies 3, 65-73.
Crocco G., L. Fariñas del Cerro, and A. Herzig (eds.), 1995, Conditionals: from Philosophy to Computer Science, Oxford: Oxford University Press.
Devlin K., 1991, Logic and Information, Cambridge: Cambridge University Press.
Etchemendy J., 1988, Tarski on Truth and Logical Consequence, Journal of Symbolic Logic 53, 51-79.
Etchemendy J., 1990, The Concept of Logical Consequence, Cambridge, MA: Harvard University Press.
Gabbay D., 1995, Conditional Implications and Non-monotonic Consequence, ch. 11 of Crocco et al. (eds.), 1995.
Goodman N., 1979, Fact, Fiction and Forecast, Cambridge, MA: Harvard University Press.
Howson C. and P. Urbach, 1993, Scientific Reasoning: the Bayesian Approach, 2nd ed., La Salle, IL: Open Court.
Katsuno H. and K. Satoh, 1995, A Unified View of Consequence Relation, Belief Revision and Conditional Logic, ch. 3 of Crocco et al. (eds.), 1995.
Kraus S., D. Lehmann, and M. Magidor, 1990, Nonmonotonic Reasoning, Preferential Models and Cumulative Logics, Artificial Intelligence 44, 167-207.
Mares E. , 1996, Relevant Logic and the Theory of Information, Synthese 109, 345-60.
McCarty D., 1991, Incompleteness in Intuitionist Mathematics, Notre Dame Journal of Formal Logic 32, 323-58.
Montague R., 1974, Formal Philosophy, New Haven and London: Yale University Press.
Mortensen C., 1981, A Plea for Model Theory, Philosophical Quarterly 31, 152-7.
Priest G., 1979, Two Dogmas of Quineanism, Philosophical Quarterly 29, 289-301.
Priest G., 1987, In Contradiction, Boston and Dordrecht: Nijhoff.
Priest G., 1990, Boolean Negation and All That, Journal of Philosophical Logic 19, 201-15.
Priest G., 1991a, The Nature of Philosophy and Its Place in a University, University of Queensland Press.
Priest G., 1991b, Minimally Inconsistent LP, Studia Logica 50, 321-31.
Priest G., 1995, Etchemendy and Logical Consequence, Canadian Journal of Philosophy 25, 283-92.
Priest G., 199+, On Alternative Geometries, Arithmetics and Logics; a Tribute to Łukasiewicz, Proceedings of the Conference Łukasiewicz in Dublin, to appear.
Prior A., 1960, The Runabout Inference Ticket, Analysis 20, 38-9; reprinted in Strawson (ed.), 1967.
Schröder-Heister P. and K. Došen (eds.), 1993, Substructural Logics, Oxford: Clarendon Press.
Shoham Y., 1988, Reasoning about Change, Cambridge, MA: MIT Press.
Slaney J., 1990, A General Logic, Australasian Journal of Philosophy 68, 74-88.
Stalnaker R., 1981, A Theory of Conditionals, in W. L. Harper (ed.), Ifs, Dordrecht: Reidel, pp. 41-55.
Stevenson J., 1961, Roundabout the Runabout Inference Ticket, Analysis 21, 124-8; reprinted in Strawson (ed.), 1967.
Strawson P. (ed.), 1967, Philosophical Logic, Oxford: Oxford University Press.
Sundholm G., 1986, Proof Theory and Meaning, ch. 8 of D. Gabbay and F. Guenthner (eds.), Handbook of Philosophical Logic, Vol. III: Alternatives to Classical Logic, Dordrecht: Kluwer Academic Publishers.
Tarski A., 1936, O pojęciu wynikania logicznego, Przegląd Filozoficzny 39, 58-68; Eng. trans. by J. H. Woodger: On the Concept of Logical Consequence, in A. Tarski, Logic, Semantics, Metamathematics: Papers from 1923 to 1938, Oxford: Clarendon Press, 1956 (2nd edition ed. by J. Corcoran, Indianapolis: Hackett, 1983), pp. 409-420.
Tennant N., 1987, Anti-Realism and Logic, Oxford: Oxford University Press.
Wason P. C. and P. Johnson-Laird, 1972, Psychology of Reasoning: Structure and Content, Cambridge, MA: Harvard University Press.
Department of Philosophy
University of Queensland
Brisbane, Australia
Even when (especially when) the field is philosophy itself. See Priest 1991a.
See, e.g., Wason and Johnson-Laird 1972, esp. the places indexed under fallacies.
There are, of course, other questions (such as, e.g., the nature of fallacies). But these all make reference back to the central question.
See McCarty 1991.
E.g., by Belnap 1962.
There are other problems with applying the notion of conservativeness here. What conservatively extends what may depend on the (apparently irrelevant) fact of the order in which rules are added on. See Priest 1990.
For full references, see Sundholm 1986.
For example, there is harmony, but not transitivity in the logic of Tennant 1987.
See Schröder-Heister and Došen 1993, and also Slaney 1990.
In the context of tonk, this was suggested by Stevenson 1961.
I shall not be concerned in this paper with what sort of thing premises and conclusions are: sentences, statements, propositions, or wot not. As far as this paper goes, they can be anything, as long as they are truth-bearers, that is, the (primary) kind of things that may be true or false.
No one would suppose that situations are mathematical entities, such as ordered n-tuples, at the very least in the case of actual situations. Strictly speaking, then, set-theoretic structures represent situations. They do this, presumably, because the situations have a structure (or, at least, a pertinent structure) that is isomorphic to that of the mathematical structure. How to understand this is a question in the philosophy of mathematics. Indeed, how to understand this sort of question is the central question of the nature of applied mathematics, and one I cannot pursue here.
Let me, in passing, note that being true in a structure is quite distinct from being true (simpliciter); for these two notions are frequently confused. The latter notion is a property, or at least, a monadic predicate, and has nothing, in general, to do with sets. One might be interested in it for all kinds of reasons, which it is unnecessary to labour. The former is a relation, and a set-theoretic one at that; and the only reason that one might be interested in it is that it is a notion necessary for framing an account of validity. The two notions are not, of course, entirely unrelated. One reason we are interested in valid inferences is that we can depend on them to preserve truth, actual truth. Hence it is a desideratum of the notion of truth-in-a-structure that there be a structure, call it the actual structure, such that truth (period) coincides with truth in it. The result is then guaranteed. No doubt this imposes constraints on what one's account of structure should be, and on how the truth-in relation should behave at the actual structure. But one should not suppose that just because the actual structure possesses certain features (such as, for example, consistency or completeness), other structures must share those features: we reason about many things other than actuality. Similarly, recursive truth conditions may collapse to a particularly simple form at the actual structure because of certain privileged properties. But that is no reason to think that they must so collapse at all structures.
It is sometimes said that there is no determinate answer to the question of whether or not an inference is (deductively) valid. An inference may be valid in one semantics (proof theory, logic, system, etc.) but not another. Maybe this is just a way of saying that it is valid according to some particular theories or accounts of validity but not others, in which case it is unproblematic. But it is, I think, often meant as stronger than this: there is no fact of the matter as to which theory gets matters right. And if it means this, it would certainly appear to be false. Either it is true that 'Socrates had two siblings' gives a (conclusive) ground for 'Socrates had at least one sibling' or it is not. This fact is not relative to anything, unless one is a relativist about truth itself.
The notion of set employed in (DV) may also be up for grabs. The nature of work-a-day sets, such as the null set and the set of integers, may not be problematic; but the definition of validity concerns all structures of a certain kind; and the behaviour of such totalities is a hard issue. Even if you suppose that the totality of sets is exhausted by the cumulative hierarchy, how far up this extends is still mathematically moot, as is the question of whether we may form totalities that are not, strictly-speaking, sets. Throw in the possibility that there may be sets that are, e.g., non-well-founded, let alone inconsistent, and one starts to see the size of the issue.
For a further discussion concerning the rivalry of logical theories, see Priest 199+, section 10. Section 13 of that paper raises the question of whether or not one should be a realist about logic. This paper answers the question in the affirmative.
It should be noted that this is not the orthodox model-theoretic account, which occurs in Tarskis later writings. For a discussion of Tarskis views and their history, see Etchemendy 1988.
This fact is picked up by Etchemendy 1990 who points out, quite correctly, that the Tarskian account of validity has to be doctored by cross term restrictions for a more orthodox result. A slightly different way of doctoring it, by making the parametric nature of quantifiers explicit, is given in Priest 1995. Note that, because of typesetting errors, all the "s in that paper appear as or .
See Priest 1995.
See Carnap 1952, and Montague 1974, esp. p. 53 of Thomason's introduction. In fact, orthodox model theory, in effect, employs meaning-postulates for the logical constants. The recursive truth conditions select one denotation for each logical constant from amongst all the syntactically possible ones.
See Priest 1995, pp. 289-91. Etchemendy has some other examples of over-generation, but these are less persuasive, ibid.
As explained, for example, in ch. 3 of Goodman 1979.
It might be thought that what is causing the problem here is that we are trying to make inductive validity an all-or-nothing matter, when it is really a matter of degree. Hence, we may simply take the degree of validity of the inference b/a to be p(a/b). But this does not seem to help. According to this account, the inference is still an inductive inference of high degree of validity, which seems odd.
It might be replied that we never simply accept things: we always accept things to a certain degree. The question of detachment does not, therefore, arise. But this is just false: acceptance may be a vague notion, but there are clear cases of things that I accept, simpliciter, for example that Brisbane is in Australia (though I would not give this unit probability, for standard, fallibilist, reasons).
See, e.g., Howson and Urbach 1993, p. 99ff.
See, e.g., Shoham 1988, Katsuno and Satoh 1995. For an application of the construction in a paraconsistent context, where normality is cashed out in terms of consistency, see Priest 1991b.
An additional condition, sometimes called the smoothness constraint, is often imposed on the ordering >: for any a and A, if A ⊨ a then either A ⊨ₙ a or there is B > A such that B ⊨ₙ a (see, e.g., Katsuno and Satoh 1995, Gabbay 1995). This prevents finite sets of premises that are non-trivial under deduction from exploding under induction, obviously a desirable feature.
It is fair to ask what happens to the boxer examples of section 3.2 on this account. Both, given reasonable assumptions, turn out to be invalid, since there are normal situations where boxers get their noses broken and normal situations where they do not.
A version of this is given a proof-theoretic characterisation in Kraus, Lehmann and Magidor 1990. The semantics include the smoothness constraint.
The ordering may even be subjective in a certain sense. For example, a natural thought is to take B > A to mean that situation A is less probable than situation B (assuming this to make sense), where the probability in question is a subjective one. An ordering relation plays an important role in the semantics of conditional logics, which are, in fact, closely related to non-monotonic logics. (See, e.g., Gabbay 1995, Katsuno and Satoh 1995.) The a posteriority of this is well recognised. See, e.g., Stalnaker 1981.
Priest 1979, p. 297.
The point is made by Mortensen 1981.
See Priest 1995, pp. 287f.
For example, the logic of information that Devlin gives in 1991, section 5.5, is exactly Dunn's four-valued truth-preservational semantics of First Degree Entailment, in disguise. For further connections between model-theoretic semantics and information, see Mares 1996.
Talks based on a draft of this paper were given at Notre Dame University, the University of Indiana, and the conference Logica '97 in Prague. I am grateful to those present for many helpful comments. I am particularly grateful to Paddy Blanchette, André Fuhrmann, Colin Howson, David McCarty, Göran Sundholm and Achille Varzi, for discussions that helped me to see a number of things more clearly.
Graham Priest
Validity
1 Introduction: Approaching the Problem
1.1 The Nature of Logic
Knowledge may well, in the last analysis, be a seamless web. Yet it certainly falls into relatively well-defined chunks: biology, history, mathematics, for example. Each of these fields has a nature of a certain kind; and to ask what that nature is, is a philosophical question. That question may well be informed by developments within the field, and conversely, may inform developments in that field; but however well that field is developed, the question remains an important one, and one that will pay revisiting. It is such a revisiting that I will undertake here.
The field in question is logic, one of the oldest areas of knowledge. The nature of this has been a live issue since the inception of the subject, and numerous, very different, answers have been given to the question what is logic?. To review the major answers that have been given to this question would be an important undertaking; but it is one that is too lengthy to be attempted here. What I do intend to do is to give the answer that I take to be correct. Even here, it is impossible to go into all details. Indeed, to do so one would have to solve virtually every problem in logic! What I will give is the basis of an answer. As we will see, there is enough here to keep us more than busy.
1.2 Focusing on Validity
What, then, is logic? Uncontroversially, logic is the study of reasoning. Not all the things that might fall under that rubric are logic, however. For example, the way that people actually reason may, in some profound sense, be part of the ultimate answer to the question of the nature of logic (think of Wittgenstein in the Investigations), but logic is not about the way that people actually think. The reason for this is simple: as a rich literatureif not common sensenow attests, people frequently reason illogically. Logic does not tell us how people do reason, but how they ought to reason. We will return to the question of the ought, here, later. For the present, let us cede the question of how people actually reason to psychology.
The study of reasoning, in the sense in which logic is interested, concerns the issue of what follows from what. Less cryptically, some thingscall them premisesprovide reasons for otherscall them conclusions. Thus, people may provide others with certain premises when they wish to persuade them of certain conclusions; or they may draw certain conclusions from premises that they themselves already believe. The relationship between premise and conclusion in each case is, colloquially, an argument, implication or inference. Logic is the investigation of that relationship. A good inference may be called a valid one. Hence, logic is, in a nutshell, the study of validity.
The central question of logic is, then: what inferences are valid, and why? Neither the answer to this question, nor even how to go about answering it, is at all obvious. Logic is a theoretical subject, in the sense that to answer this question one has to construct a theory, to be tested by the usual canons of theoretical adequacy. And what other notions the theory may take into its sweeptruth, meaning or wot notis part of the very problem.
1.3 Validity: a First Pass
How, then, is this central question to be a accept things to a certain degree. The question of detachment does not, therefore, arise. But this is just false: acceptance may be a vague notion, but there are clear cases of things that I\ accept, simpliciter, for example that Brisbane is in Australia (though I\ would not give this unit probablity, for standard, fallibilist, reasons).
See, e.g., Howson and Urbach 1993, p. 99ff.
See, e.g., Shoham 1988, Katsuno and Satch 1995. For an application of the construction in a paraconsistent context, where normality is cashed out in terms of consistency, see Priest 1991b.
An additional condition, sometimes called the smoothness constraint, is often imposed on the ordering <: for any a and A, if A ⊨ a then there is some B ≥ A such that B ⊨ a and B is maximal in this respect (see, e.g., Katsuno and Satoh 1995, Gabbay 1995). This prevents finite sets of premises that are non-trivial under deduction from exploding under induction (obviously a desirable feature).
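Stated symbolically, the smoothness (sometimes "stopperedness") condition of the non-monotonic-logic literature runs roughly as follows. The notation here (⊨ for "holds in", > for "is more normal than") is a gloss supplied for illustration, not the text's own:

```latex
% Smoothness: every situation verifying the premises a lies at or below
% a maximally normal situation verifying a.
\forall a\,\forall A\,\Bigl( A \models a \;\rightarrow\;
  \exists B \geq A\,\bigl( B \models a \;\wedge\;
  \forall C\,( C > B \rightarrow C \not\models a ) \bigr) \Bigr)
```

On this reading, the condition guarantees that maximally normal situations verifying the premises exist, which is what blocks the explosion under induction that the note mentions.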
It is fair to ask what happens to the boxer examples of 3.2 on this account. Both, given reasonable assumptions, turn out to be invalid, since there are normal situations where boxers get their noses broken and normal situations where they do not.
A version of this is given a proof-theoretic characterisation in Kraus, Lehmann and Magidor 1990. The semantics include the smoothness constraint.
The ordering may even be subjective in a certain sense. For example, a natural thought is to take B > A to mean that situation A is less probable than situation B (assuming this to make sense), where the probability in question is a subjective one. An ordering relation plays an important role in the semantics of conditional logics, which are, in fact, closely related to non-monotonic logics. (See, e.g., Gabbay 1995, Katsuno and Satoh 1995.) The a posteriority of this is well recognised. See, e.g., Stalnaker 1981.
Priest 1979, p. 297.
The point is made by Mortensen 1981.
See Priest 1995, pp. 287f.
For example, the logic of information that Devlin gives in 1991, section 5.5, is exactly Dunn's four-valued truth-preservational semantics of First Degree Entailment, in disguise. For further connections between model-theoretic semantics and information, see Mares 1996.
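Dunn's four-valued semantics for First Degree Entailment can be made concrete with a small sketch. The encoding below (values as subsets of {T, F}, a formula being designated when it is at least "told true") follows the standard presentation of Dunn's semantics; it is an illustrative reconstruction, not Devlin's or Dunn's own formulation, and all function names are supplied here:

```python
from itertools import product

# Dunn's four values, encoded as subsets of {'T', 'F'}:
# 't' = told true only, 'f' = told false only, 'b' = both, 'n' = neither.
VALUES = [frozenset('T'), frozenset('F'), frozenset('TF'), frozenset()]

def neg(v):
    # Negation swaps the "told true" and "told false" components.
    return frozenset({'T' for x in v if x == 'F'} | {'F' for x in v if x == 'T'})

def conj(v, w):
    out = set()
    if 'T' in v and 'T' in w:   # told true iff both conjuncts are told true
        out.add('T')
    if 'F' in v or 'F' in w:    # told false iff some conjunct is told false
        out.add('F')
    return frozenset(out)

def disj(v, w):
    return neg(conj(neg(v), neg(w)))  # disjunction via De Morgan duality

def designated(v):
    # Validity is preservation of "told true".
    return 'T' in v

def valid(premises, conclusion, atoms):
    """premises/conclusion: functions from an assignment dict to a value."""
    for combo in product(VALUES, repeat=len(atoms)):
        a = dict(zip(atoms, combo))
        if all(designated(p(a)) for p in premises) and not designated(conclusion(a)):
            return False
    return True

# A & B entails A in FDE...
print(valid([lambda a: conj(a['A'], a['B'])], lambda a: a['A'], ['A', 'B']))  # True
# ...but explosion, A & ~A entails B, fails: let A take the glutty value 'b'.
print(valid([lambda a: conj(a['A'], neg(a['A']))], lambda a: a['B'], ['A', 'B']))  # False
```

The failure of explosion in the second check is exactly the paraconsistent behaviour that makes FDE a natural logic of (possibly inconsistent) information.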
Talks based on a draft of this paper were given at Notre Dame University, the University of Indiana, and the conference Logica 97 in Prague. I am grateful to those present for many helpful comments. I am particularly grateful to Paddy Blanchette, André Fuhrmann, Colin Howson, David McCarty, Göran Sundholm and Achille Varzi, for discussions that helped me to see a number of things more clearly.