
March 27, 2008



My initial reaction is that the sorites here is like the sequences of shades between blue and green, or shapes between squarish and roundish. There are points where I don't know what to say about the situation, but I'm inclined to think that's because the situation is vague, not because I grasp the concepts (good/bad, blue/green, square/round) imperfectly. I'm not sure which of your responses this fits into.

It also occurred to me that there are people who think all lies are wrong, so this may not be a great example. But you could run the argument with less controversial examples: giving $1 to a beggar is good; giving him my entire net worth is not good (I need to provide for my family); somewhere in the middle things change over.

What happens if we try to apply the argument to a case where anti-realism is presumably correct (e.g. replace 'good' and 'bad' with 'judged +/-vely by ideal rational agents', or some such)? If the argument is sound then it shouldn't overgeneralize like this. But does it? (I can't immediately see where it would fail to carry over...)


I think your idea is close to 4. Accepting metaphysical vagueness is quite controversial, though, and so I tried to do without it in the form of anti-realism I had in mind. You might be right about the example. But I could have picked a better example of a morally not-bad lie. For instance, it seems that Kant is among the very few who think that lying to the murderer at the door is bad.


I think Williamson does want the argument to have a very general scope: pretty much all domains where one can speak of knowledge. So it is supposed to go even for self-knowledge about one's own mental states, like 'I'm in pain' or 'I feel cold' (his examples). In making these judgments, there needs to be a similar margin for error, which can only be provided by unknowable facts. Thus, I think he would go for realism about the property of being judged positively by ideally rational agents. As a result, whether something is judged positively by ideally rational agents would not be dependent on the attitudes ideally rational agents have towards their own judgments.

Jussi --

I agree with practically everything that you say here. If Williamson's anti-luminosity argument works (and I think it does), it works across the board -- nothing prevents it from applying to ethics just as much as to everything else.

So really I think that of the five responses to your argument that you mention, only options (3) and (5) have anything to be said for them. Either way, we will have to dump the "epistemic" forms of anti-realism. (We will also have to disagree with Thomas Nagel when he says that "moral truth cannot outstrip our ability to know it".)

By the way, I don't understand how your response 4 is supposed to work. Given classical logic, every lie is either bad or not bad. (That's just a generalization of the relevant instances of the Law of Excluded Middle. You might want to tamper with classical logic in some way, but if so, you need to tell us how.) So according to the sort of response-dependence that you're considering, the first lie where the opinions of the normal judges diverge is not a bad lie. Suppose that the last lie that is (definitely) bad is lie412. According to anti-realism, it must be knowable that this lie is bad. But intuitively, this can't be knowable, since lie412 is so incredibly close to lie413, which is not bad. If you believed that lie412 is bad, your belief would be far too close to error to count as knowledge. So it seems that if lie412 is bad and lie413 is not, then you can't know that lie412 is bad. So there are some moral truths that cannot be known.


A non-philosophical comment:

'The philosophy of philosophy' has also come out in paperback, at a very decent price.



You saying that you agree worries me *a lot*. I actually defend a sort of anti-realism about wrongness that is based on an evidentially constrained notion of epistemic truth. In addition to Nagel, there are actually a lot of people in the literature defending the view that moral truths must be knowable for their action-guidingness: Jackson, for instance, if I remember correctly.

Reply 4 is actually based on Crispin Wright's reply to Williamson. You are right that it rejects classical logic, namely the principle of bivalence: it has to admit truth-value gaps. But I believe there are intuitionistic logics that can deal with such things. I know that some ordinary inference patterns would have to be given up, but I'm not sure how damaging this is.

In the case you give, what is supposed to save the knowledge that lie412 is bad is the fact that, even though it is not true that lie413 is bad, believing that it is is not a mistake or an error, because the best opinions diverge.


thanks. That's great news; I had only seen the hardbacks, which were like £45.

Option 3 looks right to me, but "unknowable" ought to be more clearly disambiguated: does it mean "unknowable by any currently existing actual human beings" or "unknowable by any idealized moral agents"? If I have the right handle on this, anti-realism entails that moral truths can be unknowable in the first sense but not the second.


You got it right. It is the latter. However, when we idealise moral agents here, we cannot give them special faculties that normal human beings don't have. What we can give them is being in a right epistemic position (which no actual human happened to occupy), information that normal humans could gather, time and resources for inquiry, and so on.

There is a worry about option 3: how anti-realist a position would it ultimately be? I take it that the motivation for anti-realism is to account for one set of contentious facts in terms of something less contentious. One way the facts the anti-realist uses in such an account can be less contentious is for them to be more easily epistemically accessible. Yet on option 3, the 'anti-realist' is accounting for moral facts in terms of mental facts no one could ever know. At this point, it is questionable whether these facts are any less contentious than the original facts we wanted to give a theory of.


What is that, like, 600 US dollars?

I'm inclined towards option 5. I am worried about in-principle-unknowable facts about what is, e.g., funny. I think I am now leaning towards there being some of those too.

Jussi --

Wrongness and badness can still be generally action-guiding even if there are a few unknowable cases of wrongness or badness that couldn't be action-guiding. So long as we can normally know about, or at least have rational expectations of, the available actions' degree of wrongness, that seems enough to secure the action-guiding character of wrongness.

I have to say that I just don't understand how Crispin Wright's response to Williamson is supposed to work. Suppose that lie412 is bad (all suitably qualified judges agree), and lie413 is not definitely bad (the suitably qualified judges diverge) -- indeed, it's not in any way a mistake to think that lie413 is not bad at all. Then obviously even if it's not a mistake to believe that lie413 is bad, this belief couldn't count as *knowledge* that lie413 is bad. Indeed, this belief wouldn't fall just slightly short of being knowledge -- it falls very far short, almost (if not quite) as far short as believing a proposition that is definitely false. So given how incredibly similar lie412 and lie413 are, surely a belief that lie412 is bad is still far too close to these borderline cases to count as knowledge.

I have misgivings about what I gather is a reliabilist assumption behind Williamson's argument (which I haven't read, I'll admit). Reliabilism makes sense for perception, perhaps, but it's not clear that it's plausible for knowledge of all other matters. If we start with an antirealist metaethics according to which the alethic status of moral propositions -is- a function of the intentional attitudes of actual or ideal moral agents, etc., then it's not clear there's anything out there for our perceptual apparatus to "bounce off", so to speak. Thus, I wonder if the reliabilist assumption here is begging the question.

To carry this a step further, if internalism about epistemic justification for our -moral- beliefs is correct, then when we cross from lie412 to lie413, aren't we simply passing from one true justified belief to one false justified belief? How does that mean we lack knowledge concerning lie412? This seems a harmless sort of epistemic luck.

What am I missing here?

John K. Davis

An observation: if we begin at the other end of the sequence, i.e. the neutral-lie end, we will infer from the (x+1)==x rule that all the lies are neutral. This still sets up a reductio conclusion, of course, but the fact that the resulting status of all the lies is diametrically opposite suggests that our rule of inference is likely faulty. I think we should disallow this premise:

"This reliability, furthermore, requires that in all relevantly similar lying-cases that could easily arise and which ‘one could easily fail to discriminate from the given case, it is true that’ these lies are also bad."

We ought to substitute: "Reliability requires that very similar cases are adjudicated to have very similar status."

...and where the similarity relation, unlike the equality relation, is not transitive.
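The non-transitivity point can be made concrete with a small sketch (the numeric "badness" scores, the margin of 0.05, and the sequence length are all illustrative assumptions, not anything from the thread):

```python
# Hypothetical model of the sorites sequence: each lie gets a numeric
# "badness" score, and two lies count as "similar" when their scores
# differ by less than the judge's margin of error.
MARGIN = 0.05

def similar(a, b):
    """Similarity relation: within the margin of error."""
    return abs(a - b) < MARGIN

# A sequence of 1000 lies whose scores creep from 0.0 (neutral) to 1.0 (bad).
scores = [i / 999 for i in range(1000)]

# Every adjacent pair in the sequence is similar...
assert all(similar(scores[i], scores[i + 1]) for i in range(999))

# ...yet similarity is not transitive: the endpoints are not similar.
assert not similar(scores[0], scores[-1])
```

Chaining "adjacent cases get the same verdict" through such a non-transitive relation is exactly what collapses the whole sequence to a single status; the revised premise only requires that adjacent cases receive verdicts within the margin of each other, which blocks the slide.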

This is a fairly generic reply to any 'heap' paradox, of course. What's actually interesting about a 'margin of error' argument doesn't seem to hinge on being embedded in a sorites sequence, however. The key issue is whether two cases with _starkly different_ actual moral status, could at the same time be _imperceptibly different_ to a qualified judge.

The (revised) reliability premise says not. And I don't see any problem with constructing cases with arbitrarily similar perceived moral status. Therefore: actual(!) moral status must come in values separated by no more than the margin of error of an ideally-qualified judge.

Apparently, the margin of error argument doesn't tend to show anything about realism or antirealism. Rather, it shows something about the phenomenon that both are addressing.

Pardon my clumsiness: in the penultimate paragraph, please ignore "The (revised) reliability premise says not" and substitute, "An appropriate assumption about knower-reliability would say not." I forgot to add it on mine :-)

Dear all,

sorry for the lack of replies. No internet access on weekends... If anyone's still following, here are a few thoughts.


The dollar isn't quite that weak yet; £45 is a bit over $90, which for a book seems a lot. I share your worry about funniness facts, and there are similar worries in the Williamson/Wright exchange on whether rhubarb is yummy or yucky. I think the need to make room for at least some judgment-dependent facts is behind the motivation to develop something like option 4, which I seem to be failing to explain to Ralph.


I'm not sure why the belief in the badness of lie413 falls *very far* short of knowledge (even though it does fall short). It helps me to think that, in these cases and on the view we are discussing, the best opinions are constitutive of the object's having the property in question. Thus, we need to be able to describe the normal judges in a conceptually independent way. Such agents have a disposition to react to the presented cases. If they react in a uniform way, then the object has the given property in virtue of this.

Maybe in the sorites sequences such judgments start to diverge at one point. I don't see how this calls into question the earlier judgments. Any one individual can be reliable about the earlier ones; she would not have got those wrong. In the cases where the judgments diverge, there isn't anything to get wrong, unless one makes a performance error. You are probably still not happy with this. I'm sorry.


I'm not sure if you are missing anything at all. I wonder if the anti-realist starting point calls reliabilism into question. If anti-realism were true, then the truth-values of moral claims would be determined by the attitudes of either actual or ideal judges. Moral knowledge, as getting moral judgments reliably right, would then consist of getting the judgments about the attitudes right (and possibly knowing that moral truths are dependent on them). The same argument would still get started.

I guess the idea is that knowing any particular proposition to be true cannot be accidental. But it does seem accidental if one is making mistakes in similar cases.


I'm sorry, I'm not sure I follow the argument. Could you try again a bit more slowly?

In response to Jussi's comment that, "If anti-realism were true, then the truth-values of moral claims would be determined by the attitudes of either actual or ideal judges. Moral knowledge, as getting moral judgments reliably right, would then consist of getting the judgments about the attitudes right (and possibly knowing that moral truths are dependent on them). The same argument would still get started."

Yes, I think this is the path a reliabilist must take here. I wonder how it would get started, though, without turning into a coordination problem where each agent tries to figure out what all other agents are thinking, and all other agents are doing the same. We might look to an ideal agent, for example, but the ideal agent probably wouldn't look to other ideal agents. There must be something the attitudes are tracking other than other attitudes, so the reliabilist account would have to include some description of what that "something" is.

This is not to say that a reliabilist couldn't get around this, just that I'm inadequate to that task, and not sure how it might be done.


My worry about what you label as response 4 to Williamson's argument (as applied to moral conditions like wrongness, etc.) is basically quite simple.

Response 4 doesn't in any way give up the idea of a sharp boundary: It's just that the sharp boundary that is recognized by response 4 isn't the boundary between *being wrong* and *not being wrong* (i.e. the boundary between truth and falsity for judgments of wrongness). Instead, it is the boundary between *being definitely wrong* and *not being definitely wrong* (i.e. the boundary between truth and lack of truth for judgments of wrongness).

Presumably, any proposition that is known must be true. So no proposition that is in the grey zone between truth and falsity can be strictly speaking known.

Now Williamson has argued, convincingly as it seems to me, that knowledge requires allowing a margin for *error* (in the sense that if the case is too close to the borderline between truth and falsity, you can't know that the relevant condition obtains in that case). But the very same considerations seem to show that knowledge requires allowing a margin for *failure to achieve truth* (in the sense that if the case is too close to the borderline between truth and the grey zone between truth and falsity, it is still impossible to know that the condition obtains in this case).

So this is why it seems to me that Williamson's argument still works if we postulate a grey zone between truth and falsity, as your response 4 seems to do.

Small comment. Ralph wrote:

"Presumably, any proposition that is known must be true. So no proposition that is in the grey zone between truth and falsity can be strictly speaking known."

I wonder if this should be granted.

Suppose that P entails Q. Suppose it is indeterminate that Q. It follows, I suppose, that it is not determinately true that P. But it doesn't follow that it is false that P. P might be indeterminate.
Likewise, couldn't one say about these borderline cases that it is indeterminate whether they are known?
