Monday, 25 June 2007

The St Andrews Dolphins

The Conditional Probability Solution to the Swamping Problem (Carter)

Note: The following is a cross-post written by J. Adam Carter, from over at Virtue Epistemology.

Goldman and Olsson (forthcoming) in “Reliabilism and the Value of Knowledge” offer several insightful responses to the ‘swamping problem.’ I think that the ‘conditional probability’ solution that they offer is the most interesting; evaluating this solution requires attention to some important, and sometimes unnoticed, aspects of the problem.

The swamping problem has been articulated a variety of ways, and unfortunately, different versions of the problem have been referred to under the same label.

Here’s a general and (hopefully) uncontroversial formulation of the problem, as presented by Goldman and Olsson:

Template Swamping Argument

(S1) Knowledge equals reliably produced true belief (simple reliabilism)
(S2) If a given belief is true, its value will not be raised by the fact that it was reliably produced.
(S3) Hence: knowledge is no more valuable than unreliably produced true belief. (reductio)

(S2) of the argument expresses what has been called the ‘swamping premise.’ Of course, (S3) is counterintuitive, and so the idea is either to reject the swamping premise or to reject simple reliabilism (S1).

The swamping premise expresses a conditional claim. Those who want to save reliabilism are burdened, as it were, with showing how a reliably produced true belief is more valuable than an unreliably produced true belief.

Kvanvig (2003) throws down the gauntlet at this point and suggests that we can, in principle, rule out a rejection of (S2).

His suggestion is that reliability is a valuable property for a belief to have insofar as it is valuable for a belief to be ‘objectively likely to be true.’ He argues that for a reliabilist to suppose that reliability is a valuable property for a belief to have for reasons other than its being likely to be true (say, because the “normative dimension that accompanies the right kind of objective likelihood of truth introduces a new valuational element distinct from the value of objective likelihood” (Kvanvig 2003b, p. 51)) would seem magical, he says, “like pulling a rabbit from a hat” (51).

He argues further that being ‘objectively likely to be true’ isn’t a property that, when added to a true belief, increases its value, and thus, (S2) is true.

Goldman and Olsson take issue with Kvanvig’s reasoning here for a couple of reasons. The first is what I’ll call the ‘entailment’ objection. Goldman and Olsson think that Kvanvig overlooks the fact that although being reliably formed entails being likely to be true, being likely to be true doesn’t entail being reliably formed. They say:

“John may have acquired his belief that he will contract lung cancer from reading tea leaves, an unreliable process, and yet if John is a heavy smoker, his belief may well be likely to be true” (Goldman and Olsson, p. 8).

Goldman and Olsson overstate what they take to be the crime here. This example would damage Kvanvig’s view only if Kvanvig had actually claimed that the entailment goes both ways, that is (as Goldman and Olsson attribute to him), that “Being produced by a process that normally produces true belief just means being likely to be true” (Goldman and Olsson 8). Kvanvig says nothing to pin him to such a biconditional. His view is, rather, that the extent to which being produced by a reliable process is a valuable property for a belief to have is exhausted by the extent to which being likely to be true is a valuable property for a belief to have. And so an objection to Kvanvig’s claim should instead point to some feature of being produced by a reliable belief-forming process that is valuable for a belief to have in a way that is not reducible to the value a belief would have qua being objectively likely to be true.

This is, indeed, the route they take in their ‘conditional probability’ response. They argue that being produced by a reliable belief-forming process can be valuable for a belief in a way that merely being objectively likely to be true isn’t, and further, that its value is such that, when combined with a true belief, it yields a more valuable whole. They write:
“Knowing that p is more valuable than truly believing that p. What is this extra valuable property that distinguishes knowledge from true belief? It is the property of making it likely that one’s future beliefs of a similar kind will also be true. More precisely, under reliabilism, the probability of having more true belief (of a similar kind) in the future is greater conditional on S’s knowing that p than conditional on S’s merely truly believing that p.” (p. 16)
This claim, if correct, would amount to a counterexample to the swamping premise, which recall, says:

(S2) If a given belief is true, its value will not be raised by the fact that it was reliably produced.

Goldman and Olsson, thus, think that a true belief will be more valuable if produced by a reliable process because, as such, it contributes to the diachronic goal of having more true beliefs (of a similar kind) in the future. I want to turn to an example that helps illustrate their idea; it is the espresso example Zagzebski uses to support the swamping premise. Goldman and Olsson write:
“If a good cup of espresso is produced by a reliable espresso machine, and this machine remains at one’s disposal, then the probability that one’s next cup of espresso will be good is greater than the probability that the next cup of espresso will be good given that the first good cup was just luckily produced by an unreliable machine. If a reliable coffee machine produces good espresso for you today, and it remains at your disposal, it can normally produce a good espresso for you tomorrow. The reliable production of one good cup of espresso may or may not stand in the singular-causation relation to any subsequent good cup of espresso. But the reliable production of a good cup of espresso does raise or enhance the probability of a subsequent good cup of espresso. This probability enhancement is a valuable property to have.” (p. 16)
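The probabilistic claim at work here is easy to make concrete with a toy simulation. The Python sketch below is mine rather than Goldman and Olsson’s, and the reliability figures (0.95 for the reliable machine, 0.10 for the machine that only luckily produces a good cup) are purely illustrative assumptions: conditional on today’s cup being good, tomorrow’s cup is far more likely to be good if the machine is reliable and remains at one’s disposal.

```python
import random

def p_next_good_given_first_good(p_good, trials=100_000):
    """Estimate P(second cup good | first cup good) for a machine that
    produces a good cup with probability p_good on each use."""
    both = first = 0
    for _ in range(trials):
        cup1 = random.random() < p_good
        cup2 = random.random() < p_good
        if cup1:
            first += 1
            both += cup2
    return both / first

reliable_machine = 0.95    # illustrative figure: normally makes good espresso
unreliable_machine = 0.10  # illustrative figure: only luckily makes a good cup

print(p_next_good_given_first_good(reliable_machine))    # roughly 0.95
print(p_next_good_given_first_good(unreliable_machine))  # roughly 0.10
```

The sketch does nothing more than unpack the quoted claim: given that the same machine remains available, the reliably produced good cup raises the probability of a subsequent good cup in a way the luckily produced one does not.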
This attempted assault on the espresso analogy scores a victory at the expense of betraying a deeper, and perhaps intractable, defect in the ‘conditional probability’ response. The victory, in short, is that it gives an explanation for why two equally good cups of espresso might be such that one is more valuable than the other; this explanation rejects an assumption that Zagzebski seemed to make in the analogy, which is that ‘taste is all that matters’ for espresso (just as, she thought, ‘being true’ is all that matters for a belief).

This sword cuts two ways, though. Consider that True Temp is a reliable belief former, and so the conditional probability of his future beliefs (of a similar kind) being true is greater given that they are formed by a reliable process (i.e. a reliable thermometer, perhaps purchased at the same store where you’d find a reliable espresso maker) than it would be had his beliefs been merely true but unreliably produced. But, we should object, True Temp is not a knower, and so his state should not be as valuable as it would be if he were a knower. The conditional probability view, however, has no way to explain this. In sum, the conditional probability response to the swamping argument works only if True Temp knows. But he doesn’t. So it doesn’t work. (Or so my objection goes…)

Here’s a second objection:

Suppose I have cancer and am in the hospital, and my 12-year-old boy (I don’t really have one) is playing baseball in the little league world series. He has been practicing every day from sunup till sundown in hopes of making it to the world series and hitting a home run. It is the ninth inning of the game, and my son (little Johnny) is up to bat. I am intently watching the television screen as he shouts, “This one is for you, Dad!” The pitch is on the way, and then…

(Option A): The television suddenly blacks out. Knowing I might die any minute, I decide that Johnny has practiced hard and probably hit a home run, and so I believe that he did, although sadly, I realise I will never know. (And then I die, my last thoughts being ones of curiosity).

(Option B): The television does not black out, and I get to see Johnny hit the home run on TV. In fact (for even more evidence), my hospital is close to the baseball field, and the ball comes through the window and lands on my bed. I know that Johnny hit the home run, and then I die (in peace).

On the conditional probability view, my true belief in Option B (in which I form my belief through reliable processes, i.e. watching a previously non-deceptive TV broadcast which doesn’t display phantom images) is a more valuable state than my true belief in Option A insofar as the true belief in B was produced by a reliable process, and as such raises the probability that future beliefs (of a similar kind) will be true. However, as I know I am dying, I have no interest in future beliefs; I am aware I am in my last throes, and in fact I do not form any more beliefs of a similar kind. There are, then, no future beliefs whose truth the reliable process could make more likely, so the extra value the view posits has nothing to attach to. The conditional probability approach, then, seems committed to claiming that my true belief in B is no more valuable than my true belief in A. But surely that’s not true!

Friday, 22 June 2007

St Andrews Graduation 2007

The Procession

The Convocation

The Garden Party


St Andrews 2007 Graduation Pictures

are here.

Monday, 18 June 2007

Defending the Lottery Argument (Part 2)

In this post, I will address Aidan's second objection to my lottery argument. By his lights, even if we grant whatever closure (or conjunction) step I need to make the aforementioned inference, premise (A6) still seems in need of defence. For example, suppose that S, due to some introspective failure, does not recognise that what she may justifiably believe about this lottery has the form of (a*). This is consistent with S's recognising that if she believed something which had the form (a*) she would be believing a set of inconsistent propositions.

The upshot, according to Aidan, is that (A3) and (A4) might be true, and my opponent might grant me whatever I need to conclude from that that what S may justifiably believe about this lottery is (by (A5)) inconsistent. But even in the presence of (A6), that's not enough for (A7); it's not enough that S recognise that something of the form (a*) would be inconsistent – she must also recognise that what she may justifiably believe about this lottery has the form (a*). And according to Aidan, therein lies the rub. The number of tickets n might be very large (and the larger n is, the more plausible (A1) and (A2) are). Why, then, should we accept that S is able to recognise that the huge set of beliefs she may justifiably have has the form of (a*)?

Now, I take (A6) to be true by hypothesis. Moreover, I take it to be an intelligible hypothesis, a fact that our own ability to understand the lottery argument makes obvious. But just so that it is clear that I am not begging any questions in so doing, some clarification and defence of (A6) may be in order. I do not believe that it is impossible for a lottery subject, due perhaps to some sort of introspective failure, to fail to recognise that what she may believe about the lottery takes the form of (a*). However, I think that such an oversight is nowhere near as likely as Aidan makes it sound. Given the set-up of the lottery argument, the lottery subject knows all of the following:
(i) That she is playing a lottery,
(ii) That the lottery is composed of a million tickets,
(iii) That one ticket must win and only one ticket can win,
(iv) That the odds of her ticket losing are the same as those of any other ticket losing,
(v) That she is no more justified in believing that her ticket will lose than any other.
Given that the subject is aware of (i)-(v), it would be odd for her not to realise that what she may justifiably believe about the lottery, given (A1), takes the form of (a*). More importantly, such an individual would most certainly be guilty of gross introspective and rational failure and may, eo ipso, count as doxastically irresponsible. Thus, we seem to have independent grounds for holding that such an individual is not justified.

Moreover, I think there is something misleading about Aidan’s claim that the likelihood of the lottery subject failing to realise that what she may believe about the lottery takes the form (a*) increases the larger we make the overall pool of tickets. After all, I have presented the argument with a million tickets, and this fact does not make the lottery argument any more difficult to understand than if I spoke instead of a hundred or a thousand tickets. That is to say, what is important is the formal structure of the reasoning implicated, rather than the individual application of the reasoning. S only needs to recognise that the inference she makes with regard to her own ticket may, by parity of reasoning, also be made regarding every other ticket, in order to recognise that what she believes about the lottery takes the form of (a*). She does not actually have to set about the arduous task of applying the relevant inference to each individual ticket. Thus, I see little motivation for thinking that a doxastically responsible subject may be mistaken on this question.

But let us grant that some given subject does not recognise that what she may justifiably believe about the lottery takes the form of (a*). I do not see how this poses a problem for the lottery argument. Notice, the conclusion of the lottery argument merely implicates what S may justifiably believe, not what she actually believes. To say that one may justifiably believe p is to say that it is rationally permissible to believe p. The question with which we are presently concerned, then, is a normative one regarding what is rationally permissible for S to believe. The conclusion of the reductio is that it is rationally permissible for S to believe something she recognises to be inconsistent, and this is a conclusion I take to be unacceptable. Whether or not S actually does recognise that her belief is inconsistent is, as far as the present argument is concerned, beside the point.

In this regard, the lottery argument employs a similar strategy to Boghossian-type arguments against the compatibility of self-knowledge and content externalism. For the Boghossian argument to be effective, a subject need not actually come to believe some fact about her environment, for example, that she inhabits a planet with H2O, by means of a priori introspection. In fact, given that people are not typically shuttled between Earth and Twin Earth, the scenario described by Boghossian arguments may turn out to be purely hypothetical. (Though some philosophers have argued that there are in fact real life slow-switch scenarios, such as a speaker switching from British to American English.) However, the mere fact that it is possible for the subject to acquire such knowledge is seen as sufficient to impugn the content externalist position. The intuition here is that a certain epistemic achievement should not be possible. Analogously, the intuition underlying the lottery argument is that a certain epistemic achievement should not be permissible—namely, a subject believing something she recognises to be inconsistent. The point of the reductio is that we should reject (A1) since it, in conjunction with a number of other plausible premises, suggests that believing a set of propositions one knows to be inconsistent is permissible. (Again, whether or not some particular subject actually does come to believe something she recognises to be inconsistent is beside the point.)

Notice, in the above reply, I have interpreted Aidan as claiming that a subject may recognise that she is in a lottery-type situation without realising that what she may justifiably believe about the lottery takes the form of (a*). But one may well ask: what about the case in which the subject fails to recognise that she is in a lottery-type situation altogether? In such a case, (A6)—which my lottery argument simply assumes by hypothesis—does not even apply. The lottery argument would therefore (apparently) fail to cover such cases. This is the objection raised by Clayton Littlejohn, and it will be the focus of my next post on this topic.

Monday, 11 June 2007

Defending the Lottery Argument (Part 1)

Over the weekend I finally had the honour of meeting Aidan McGlynn, of The Boundaries of Language fame, who was one of the delegates at the Arché Vagueness Conference here at St Andrews. Aaron Bogart, from over at Struggling to Philosophize, was also at St Andrews over the weekend and promises to visit again later this summer. Looking forward to sharing a pint and a chin-wag with you again, Aaron.

Now, down to business. In honour of Aidan’s very thoughtful comments, I’ve decided to dedicate this (and the upcoming) post to responding to his two objections to my lottery argument. Aidan's first objection is that my argument presupposes some type of closure principle. While he does not actually spell out any particular closure principle, given that my argument has to do with justification, he presumably has something akin to the following justification closure principle in mind:
(JCP) If S may justifiably believe p and p entails q, and S competently deduces q from p and accepts q as a result of this deduction, then S may justifiably believe q.
According to Aidan, the inference from (A3) and (A4) to the claim that what S may justifiably believe about this lottery has the form (a*) seems to rely on some such closure principle. However, Aidan asks, ‘what mandates that [my] opponent accept such a closure principle? If anything, it looks somewhat suspect once one allows that justification does not require probability 1.’

First, I should point out that strictly speaking, the inference from (A3) and (A4) to the claim that what S may justifiably believe about the lottery takes the form of (a*) does not rest on (JCP), or any other type of closure principle. Rather, it rests on the following conjunction rule:
(CR) If S may justifiably believe p at time t and justifiably believe q at t, then S may justifiably believe p and q at t.
However, a closure principle does seem to make an appearance elsewhere in the proof. According to the set-up of the argument, S is aware that one ticket must win and only one ticket can win. Given (JCP), it follows that S may justifiably believe that it is not the case that (t1 will lose, and t2 will lose, and so on, to the millionth ticket)—which, by De Morgan's law, is just the disjunction in (A4). Given this fact, a few brief words in defence of my argument's reliance on (JCP) may still be in order.

I begin by noting that I do not share Aidan’s intuition that I have no right to expect my opponent to accept (JCP). (JCP), and closure more generally, is accepted by the vast majority of philosophers. Of course, some thinkers (most notably Nozick and Dretske) have challenged certain formulations of closure. But theirs is the minority position. (This of course does not mean that closure is true, but it seems sufficient to establish its status as the default position.) Moreover, it is worth pointing out that the version of closure implicated in the lottery argument is quite modest. If someone may justifiably believe that p, and competently deduces q from p, it seems quite reasonable to think that she would have reasons adequate to justify her belief in q. In fact, I'm not even sure Nozick and Dretske would be opposed to (JCP), since it merely implicates justified belief, rather than knowledge. (I'm not trying to be coy here; I'm honestly not sure what Nozick and Dretske would say about (JCP). Does anyone care to fill me in on what, if anything, Nozick and Dretske have to say about justification closure?)

Interestingly, (JCP) is one of the essential premises in Gettier’s original argument against JTB (see 'Is Justified True Belief Knowledge?'), and rejecting (JCP) would therefore constitute a rather straightforward way of resisting Gettier's conclusion. However, I think the fact that relatively few philosophers have taken this route (I only know of one) is testimony to just how widely accepted (JCP) is and the unwillingness of philosophers in general to give it up. Again, this does not establish that (JCP) is true. However, it vouches eloquently for its status as the default position in discussions of this kind. Finally, and perhaps most importantly (depending on your philosophical commitments), (JCP) does seem to have the endorsement of common sense.

That being said, I do not believe that the lottery argument necessarily presupposes (JCP), since S has independent reason for justifiably believing that t1 might not lose, t2 might not lose, and so on. Even if closure were false, S's knowledge of how a fair lottery works would be enough to justify her belief that any given ticket might not lose.

I believe what is doing the real heavy-lifting in the lottery argument is (CR). I take (CR) to have no less intuitive support than (JCP). Thus, substantive argumentation is necessary if one is to urge the rejection of (CR). Again, I believe the burden of proof rests on my opponent in this regard.

However, a brief word of caution and clarification may be in order with regard to (CR). Many erroneously conflate the conjunction rule regarding epistemic justification (which is highly plausible) with the conjunction rule regarding probabilities (which seems clearly false). This mistake is particularly common in discussions of lottery-type cases which explicitly make reference to probabilities. To see that the conjunction rule regarding probabilities is false, one simply has to recall that (on a standard probability analysis) the probability of a conjunction is never greater than that of its least probable conjunct, and is typically lower. Thus, adding enough conjuncts can render a conjunction of individually probable items improbable.
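Here is a minimal sketch of the arithmetic (my own illustration, not drawn from any of the authors under discussion); the assumption that the individual claims are probabilistically independent is adopted purely to show how conjunction erodes probability:

```python
# Each of a million lottery tickets is overwhelmingly likely to lose...
n = 1_000_000
p_single_ticket_loses = 1 - 1 / n        # 0.999999 for any individual ticket

# ...yet, treating the claims as independent for illustration, the conjunction
# "t1 will lose and t2 will lose and ... and tn will lose" is not probable:
p_all_lose_if_independent = p_single_ticket_loses ** n

print(p_single_ticket_loses)       # 0.999999
print(p_all_lose_if_independent)   # about 0.37

# In the actual lottery the conjunction fares even worse: exactly one ticket
# must win, so the probability that every ticket loses is 0.
```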

However, the same is not true of a conjunction rule regarding epistemically justified beliefs. In contradistinction to probability, justification is generally preserved, and may even be strengthened, when we add new justified beliefs to our set of beliefs. Sharon Ryan puts the point much more succinctly than I can:
Imagine that a person S is justified in believing the following list of individual claims: I am in an airport, I see a plane taking off from the runway, I hear a loud noise, The noise I hear is probably a plane, The noise I hear is probably not a giant snoring. Imagine S forms a conjunction of all of the individual claims listed above. It seems reasonable to think that the justification for the conjoined beliefs is no weaker than is the justification for all the individual conjuncts. If anything, justification is strengthened by conjoining this set of individual beliefs. It is not true that the more conjuncts you add, the less justified the set becomes. (Ryan, 'Epistemic Virtues of Consistency', p. 124)
If Ryan is right, and I think she is, then although a conjunction rule regarding probability may be false, the conjunction rule regarding epistemic justification is true. Moreover, it is the latter, rather than the former, that is implicated in the inference from (A3) and (A4) to the claim that what S may justifiably believe about the lottery takes the form of (a*).

Finally, it should be noted that rejecting the conjunction rule regarding justification has a rather counter-intuitive consequence—namely, that a subject may justifiably believe a set of statements she recognises to be inconsistent. (This does not, however, follow from rejecting the corresponding rule regarding probability, which is altogether silent on the question of epistemic justification.) While a subject who believes that 'each ticket will lose' and also believes that 'not all of the tickets will lose' is not believing a contradiction, it is still not possible for all the members of her set of beliefs to be true. Since a lottery subject may be aware of this fact, rejecting (CR) means that a subject may justifiably believe a set of statements which she knows cannot all be true. Personally, that strikes me as odd.

Saturday, 9 June 2007

Richard Rorty, 1931-2007

The following obit is taken from today's issue of Telos:

Richard Rorty, the leading American philosopher and heir to the pragmatist tradition, passed away on Friday, June 8.

He was Professor of Comparative Literature emeritus at Stanford University. In April the American Philosophical Society awarded him the Thomas Jefferson Medal. The prize citation reads: "In recognition of his influential and distinctively American contribution to philosophy and, more widely, to humanistic studies. His work redefined knowledge 'as a matter of conversation and of social practice, rather than as an attempt to mirror nature' and thus redefined philosophy itself as an unending, democratically disciplined, social and cultural activity of inquiry, reflection, and exchange, rather than an activity governed and validated by the concept of objective, extramental truth."

At the awards ceremony, presenter Lionel Gossman celebrated Dr. Rorty as an advocate of "a deeply liberal, democratic, and truly American way of thinking about knowledge." Dr. Rorty's published works include Philosophy and the Mirror of Nature (1979), Consequences of Pragmatism (1982), Contingency, Irony, and Solidarity (1988), Objectivity, Relativism and Truth: Philosophical Papers I (1991), Essays on Heidegger and Others: Philosophical Papers II (1991), Achieving Our Country: Leftist Thought in Twentieth Century America (1998), Truth and Progress: Philosophical Papers III (1998), and Philosophy and Social Hope (2000).

Monday, 4 June 2007

Lottery Argument Against Defeasible Evidence

This post is an updated version of one I published over at the Web of Belief.

I wish to argue that it is a conceptual requirement of justification that it be factive. On this view, it is a conceptual requirement vis-à-vis some type or token reason {R}, that {R} may only justify a subject’s belief that p if {R} guarantees the truth of p. When {R} meets this stipulation, I will describe {R} as a factive reason for believing that p. I contrast having a factive reason for p with having evidence for p, in which evidence is essentially defeasible. Typically, when we describe some evidence {E} as defeasible, we mean that {E} may be evidence for p despite the fact that {E} ∪ {E*} is not evidence for p. In such a case, we would say that {E*} defeats {E}, or that {E*} is a defeater for {E}. In the discussion that follows, I will be using the expression ‘defeasible’ more broadly to refer to any evidence {E} for p, in which {E} fails to guarantee the truth of p. I take defeasible evidence to include a wide cross-section of non-truth-entailing evidence or reasoning, including inferences to the best explanation, abduction, analogical reasoning and probabilistic reasoning. What unites all these various theses is that they offer a criterion of justification that falls short of truth. On my view, it cannot be part of our concept of justification that {E} may justify a subject’s belief that p if {E} is construed as defeasible evidence for p.

For the sake of simplicity, I will be construing the notion of defeasible evidence primarily along probabilistic lines. (This is only for heuristic purposes since I do not believe our quotidian concept of evidence, even when defeasibly construed, is essentially probabilistic in nature.) Along these lines, some evidence, {E}, may be described as defeasible vis-à-vis some proposition p just in case the probability of p on {E} is less than one. I will say that p is likely when p has a greater probability than not-p, or when p exhibits a probability greater than 0.5 but less than one. It strikes me as a truism that a subject may not justifiably believe a set of propositions she recognises to be inconsistent. My lottery argument† will take the form of a reductio beginning with the assumption, ‘S may justifiably believe that her ticket, t1, will lose’, and concluding with the negation of the aforementioned truism. Assuming that the first premise is the least plausible of all the premises in my argument, the argument should establish that the first premise ought to be rejected. My reductio runs as follows:
(A1) S may justifiably believe that her ticket, t1, will lose.

(A2) If S may justifiably believe that t1 will lose, then she may also justifiably believe that t2 will lose, she may justifiably believe that t3 will lose ... she may justifiably believe that ticket tn will lose.

(A3) S may justifiably believe that tickets t1, t2 ... tn will lose. [from (A1) and (A2)].

(A4) S may justifiably believe that either t1 will not lose or t2 will not lose ... or tn will not lose.

(A5) Propositions of the following form comprise an inconsistent set: (a) p1, p2 ... pn, either not-p1 or not-p2 ... or not-pn.

(A6) S recognises that propositions of the following form comprise an inconsistent set: (a*) t1 will lose ... tn will lose, either t1 will not lose ... or tn will not lose.

(A7) S may justifiably believe a set of inconsistent propositions that she recognises to be inconsistent. [from (A3), (A4), (A5), and (A6)].
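Before turning to the probabilistic gloss, a toy model may help bring out what the reductio trades on. The Python sketch below is my own and uses a three-ticket lottery rather than a million tickets, but the structure is the same: each member of (a*) is individually probable, yet the members cannot all be true together.

```python
# A miniature three-ticket lottery: exactly one ticket wins, so there are
# three possible outcomes ("worlds"), one per winning ticket.
n = 3
worlds = range(n)  # world w is the outcome in which ticket w wins

def loses(ticket, world):
    return world != ticket

# Each individual belief "ticket i will lose" is probable (2/3 > 0.5):
for i in range(n):
    prob = sum(loses(i, w) for w in worlds) / n
    print(f"P(ticket {i + 1} loses) = {prob:.2f}")

# But no outcome makes all of the beliefs true at once: some ticket wins in
# every world, so the set (a*) cannot be jointly satisfied.
print(any(all(loses(i, w) for i in range(n)) for w in worlds))  # False
```

Nothing changes when n is a million rather than three: the individual probabilities climb towards one while the joint set remains unsatisfiable.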
Expressed in probabilistic terms, I take the lottery argument to show that a subject is not justified in believing that p based on evidence that makes p probable with a probability less than one. In my next post, I will respond to the objection that my lottery argument does not generalise across all cases of beliefs implicating defeasible evidence.


† See Dana Nelkin's paper “The Lottery Paradox, Knowledge and Rationality” for a discussion of the lottery paradox regarding knowledge and justifiably held belief.