
July 25, 2015

Comments


Several of the negative modal constructions here are ambiguous. Let's focus on "you ought not believe p." Here are three different ways of formalizing that:
A. -OBp
B. O-Bp
C. OB-p

A doesn't say what you ought to do. It just says that it's not the case that you ought to believe p. This is compatible with (s1) believing p; you just don't have a doxastic duty to do so. B says that there is something that you ought to do, and that something is not to believe p. B is compatible with either (s2) believing that not-p or (s3) withholding belief from both p and not-p. B isn't compatible with scenario s1. C says that you ought to believe not-p, and so it's only compatible with s2. This suggests (and it's not hard to prove) that C is strictly stronger than B.
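Dan's ranking of B and C can be checked by brute force in a toy possible-worlds model (a sketch of my own, not part of the original comment): read "O" as truth at every deontically ideal world, and note that the entailment from C to B needs the bridge assumption that believers are consistent, i.e. never believe both p and not-p.

```python
from itertools import combinations

# Each world records the agent's doxastic state: (believes p, believes not-p).
ALL_STATES = [(bp, bnp) for bp in (False, True) for bnp in (False, True)]
CONSISTENT = [s for s in ALL_STATES if not (s[0] and s[1])]

def O(pred, ideal):
    """'Ought(pred)': pred holds at every deontically ideal world."""
    return all(pred(w) for w in ideal)

def entails(premise, conclusion, states):
    """Does O(premise) entail O(conclusion) in every model built from states?"""
    models = [list(c) for r in range(1, len(states) + 1)
              for c in combinations(states, r)]
    return all(O(conclusion, m) for m in models if O(premise, m))

B_not_p = lambda w: w[1]        # scope of C: believes not-p
not_B_p = lambda w: not w[0]    # scope of B: does not believe p

print(entails(B_not_p, not_B_p, CONSISTENT))  # True:  C entails B ...
print(entails(not_B_p, B_not_p, CONSISTENT))  # False: ... but not conversely
print(entails(B_not_p, not_B_p, ALL_STATES))  # False: without consistency, C does not entail B
```

The last line shows that the entailment is hostage to a substantive no-inconsistent-beliefs norm: drop that norm and C no longer entails B.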

A doesn't seem to work with 2. If we use A, then 2 is equivalent to "if it's permissible for you to believe p, then you ought to believe p." This is incompatible with doxastic supererogation.

But both B and C seem too strong for 1. Since C implies B, it suffices for me to argue that B is too strong.

So consider cases where evidence is scant but I believe anyway. Suppose my small child or beloved pet has gone missing. I don't have evidence that they're safe and will be found soon, and so it doesn't seem rational for me to believe that they're safe and will be found soon. But I will cling tightly to this belief for several days, perhaps even several weeks. I'm not doxastically required to not hold this belief. (And indeed, I'm not doxastically required to believe that either they're in danger or will not be found soon: C is far too strong here.)

These sorts of cases are also very common in scientific practice. A scientist becomes convinced of a certain hypothesis on little or no evidence, and this conviction leads them to gather enough evidence to actually justify the hypothesis. (Or at least *try* to gather enough evidence.)

Thanks, Dan! I am afraid to say, however, that I disagree with almost everything that you say here.

First, my formulations were not ambiguous: it should be quite clear to any competent speaker of English that the logical form of 'You ought not to believe p' is 'O-Bp' (i.e. B according to your labelling).

Secondly (according to your labelling) C does not entail B. That is, "You ought to believe not-p" does not logically entail "You ought not to believe p". It is at least logically possible that it might be simultaneously the case that you ought to believe not-p and that it is permissible for you to believe p. It is a substantive claim about the norms of belief that we ought never to believe inconsistent propositions.

Thirdly, in the case that you consider, I would say that if it is not rational for you to believe that your missing pet is safe and will soon be found, there is a sense in which you "ought not" to believe this. Perhaps there are some senses in which it would not be true to say that you "ought not" to believe this -- it would be distressing and unhelpful for you not to believe it at this point, perhaps -- but there is surely a clear sense in which it is true to say that you "ought not" to believe it.

'Ought', as all the linguists and formal semanticists would agree, is a polysemous or context-sensitive term, expressing different concepts in different contexts. Your case doesn't come close to showing what you would need to show in order to refute my argument: that there is no sense in which, in a case of this kind, you "ought not" to believe that your missing pet is safe and will soon be found.

Re: Your recent post on Rationality is permissibility.
5. "If it is not rational for you to believe p, it is irrational for you to believe p"
seems hasty. It is not rational for me to believe that:
The sender's address marked "noreply+feedproxy@google.com" is a good indication that the sender deliberately intended to fool readers into thinking that PEA Soup is run by a bunch of metaphilosophy crazed Martians.
It may be uncharitable, silly, a waste of time, or confused for me to believe this (see above), but it is not necessarily irrational.
Epistemic actions like believing, doubting, and denying are neither permissible nor impermissible. Some might take the slippery slope and say that such attitudes are always permissible, never impermissible. But why bother?

Thanks, Amelie!

I accept that some things -- rocks or numbers, for example -- are neither rational nor irrational. But I find it hard to accept that a belief might be neither rational nor irrational.

Beliefs always involve the believer's making some use of her powers of reasoning and thought. These powers are always either (a) used properly or (b) not used properly: in the former case (a), the beliefs are rational; in the latter case (b), they are irrational.

I do not agree that beliefs are "epistemic actions", but even if they were, I don't see why it would be plausible that they were always (or indeed ever) "neither permissible nor impermissible". Indeed, it seems intuitively clear to me that all actions are either permissible or impermissible.

So I'm not convinced that you have raised any persuasive objection to my premise (5).

Re: "all actions are either permissible or impermissible."
Surely the permissibility of actions is relative to context? Absent the specification of its context, an action type might be either, neither, or both.

I'm confused about how strong of an argument you're claiming to offer. The post itself purports to offer an argument that applies "if the notions of a belief’s being “justified” or “rational” are normative at all," asserts that the premises "would surely be accepted" or are "obviously true," and so on. Here you seem to be saying that your argument applies to almost any sense of "justified," "rational," and the other key terms.

But then, in response to my earlier comment, you assert that, in order to refute your argument, I need to show "that there is no sense in which" your premises are false. Here you seem to be saying that your argument applies to only certain senses of the key terms, perhaps even only ones that apply in certain contexts or reflect a narrow set of considerations.

Amelie,

When I said "All actions are either permissible or impermissible", I wasn't talking about act-types!

There could be other interpretations, but according to the way in which I tend to think of the matter, each of these "actions" is in effect a possible state of affairs, consisting in a certain agent's acting in a certain way at a certain time. In this sense, each action has a "context" already built into it.

Dan,

I'm sorry: I was trying to be brief, and so didn't explain things fully.

My argument is meant to apply to every normative sense of 'rational' and 'justified'. What I was arguing for, more precisely, was the following: For every such normative sense of these terms, there is some corresponding use of the terms 'ought' and 'permissible' that satisfies the premises of my argument.

Hi Ralph,

You've convinced me when it comes to belief (but then, I'm not an epistemologist). But I wonder how far you want the argument to generalize; anyway I don't think it generalizes to action.

Suppose your options are doing X and doing Y. Doing X is morally better than doing Y, but it is also supererogatory: doing Y would not be wrong. So here are two propositions. Both are obviously controversial (in fact, I don't believe either), but neither (I'd think) is obviously false.

(i) If doing X is morally better than doing Y, it is not rational to do Y.
(ii) If doing Y is not wrong, doing Y is not impermissible.

But if this is right, it follows that the practical analogue of at least one of your premises is at least not obvious. For what it's worth, I'd give up (2), but I think there's room to doubt any of (5)-(7).

Thanks, Ulysses!

I would say that my position is even more obviously true of rational choice than it is of rational belief.

Consider e.g. a "Buridan's Ass" case, in which there are two mutually incompatible options, A and B, such that no alternative options are more strongly supported by the balance of reasons, and neither of these options is any more strongly supported by reasons than the other. Then clearly, it is rational to choose A and also rational to choose B. But neither choice is rationally required: it is permissible not to choose A, but to choose B instead, and equally it is permissible not to choose B but to choose A instead.

In your case, it seems quite clear that it is perfectly rational to choose the non-supererogatory option.

However, it may take a bit of reflection to see that sometimes when "doing X is morally better than doing Y" (as you put it), doing Y is not morally wrong, and doing X is supererogatory. It is this, and not any problems with my premises, I think, that explains why your proposition (i) is not obviously false.

If the argument here is supposed to show that calling a belief 'rational' is nothing stronger than calling it permissible, I don't think it works, because the argument still holds when 'good' is substituted for 'rational,' and I assume we agree that 'good' is stronger than 'permissible.' Here's what it would look like:

1. If it's not good (i.e., bad) for you to believe/do p, you ought not to believe/do p
2. If you ought not to believe/do p, it's not permissible for you to believe/do p
3. If it is not good (i.e., bad) for you to believe/do p, it's not permissible for you to believe/do p.
4. If it is permissible for you to believe/do p, it is good for you to believe/do p.

Premises 1 and 2 strike me as definitely true, premise 3 seems probably false (e.g., permissible but suboptimal actions), and premise 4 is definitely false. I'm not proposing any diagnosis of how the argument goes wrong, just a reductio.

Hi Ralph,

Not sure this is the kind of thing that your targets have in mind, but I wonder how your view applies to cases in which thinkers have compelling practical reasons to believe against theoretical reason.

It seems that Joe could have compelling practical reason to use his powers of reasoning and thought in "improper" fashion or fail to exercise those powers "properly". If so, that would cast doubt on

7. If you are rationally required not to believe p, you ought not to believe p.

Thanks, Derek!

Your reductio fails, for the following reason. The term ‘good’ is (as a long line of moral philosophers from G. H. von Wright to P. F. Geach to J. J. Thomson has argued) a multiply context-sensitive and polysemous term. However, it is fairly easy to see that every sense of ‘good’ either (a) is equivalent to some corresponding sense of ‘permissible’, or else (b) invalidates one of the premises of the argument that you claim to be analogous to my argument about ‘rational’.

After all, being permissible is clearly a good feature of an action or an attitude – it’s certainly not a bad feature, and so it is presumably to at least some degree a good feature. So actions or attitudes that are permissible are all at least in one way good.

Alternatively, consider stronger senses of the term ‘good’ – senses in which being permissible is not sufficient for being “good”. Supererogatory actions are a good example of actions that are good in such a stronger sense.

But then premise (1) of your argument is obviously false. Even if doing X is not good in the way in which heroic or saintly supererogatory actions are, we certainly cannot conclude that you ought not to do X. In such a case, doing X may be quite permissible, even if it is not in the relevant way “good”.

So, in general, I do not believe that you have successfully identified a sense of ‘good’ which both (a) is stronger than the corresponding sense of ‘permissible’, and also (b) validates all the premises of your argument.

Thanks, Brad!

The problem that you raise is familiar from the literature about the normativity of rationality. (See e.g. Andrew Reisner’s 2011 paper “Is there reason to be theoretically rational?”)

The argument of my blog post was only addressed to philosophers who think that ‘rationally required’ expresses a normative concept. Any philosopher who thinks this is going to have to address this problem. Presumably, they will do this by arguing that there are different senses of ‘ought’ – including (a) a sense in which you “ought” to believe what you have compelling practical reasons for bringing it about that you believe, and (b) a second sense in which you “ought” to believe only what you have compelling theoretical reasons to believe.

At all events, if ‘being rationally required to believe p’ does express a normative concept, then it surely expresses a stronger concept than merely the concept of having some reason to believe p: it expresses the concept of having compelling reasons of some kind for believing p.

I'm persuaded by your argument that (1) the permissibility of a belief is sufficient for the belief’s rationality, but I still feel like (2) saying a belief is "rational" is saying more than just that it's permissible.

What if all permissible beliefs are also obligatory? Then (1) and (2) would both be true, right?

According to some consequentialist moral theories, all morally permissible actions are also obligatory. And you could construct a proof that, if one of these theories is true, then if an action is permissible, it's "morally right." But that wouldn't mean that, on these theories, to say that something's "morally right" is merely to say that it's permissible--would it?

Thanks, Mike!

You are quite right that if all permissible beliefs are obligatory, then (1) and (2) would both be true -- even though it would then also be true that whenever it was rational for you to believe p, you ought to believe p. (It would not follow that 'rational' means "obligatory", of course -- no more than it would follow that 'permissible' actually means "obligatory"...!)

As for the puzzle about 'right' that you raise at the end of your comment, I said something about this in an earlier post on this very blog. See:
http://peasoup.typepad.com/peasoup/2008/05/a-puzzle-about.html

I disagree with much of this. I'll pick at two places:

"1. If it is not rational for you to believe p, you ought not to believe p."
This would only be true if there were no other normative pressures on people. The example that jumps to mind is Kirk saving Spock in the Star Trek movies. Spock does not expect to be saved, because it would be irrational. Kirk agrees, but he is affected by a different normative pressure in addition to rationality: friendship. I think that other normative pressures do in fact exist, and it would be an improper use of the word "rational" to try to extend it so that it covered them.

"5. If it is not rational for you to believe p, it is irrational for you to believe p."
This seems straightforwardly wrong in cases where we do not know. I don't know whether my mother is gardening at this moment. It's a fair bet on a summer evening, but I don't know. It would be neither rational nor irrational to believe that she is gardening right now.
You may be tempted to say that such a belief would be irrational because it is imperfectly supported. But in life we are often required to make decisions/act on the basis of imperfect knowledge. While it is theoretically possible to hold in one's head the uncertainty of our knowledge, in fact it is pretty hard for most human beings to maintain that dissonance between 70/30 knowledge and 100/0 action. It is often pragmatically (or consequentially) better to simply choose to believe things about which we have imperfect knowledge.

Posit that “X ought to phi” is normatively equivalent to “there is a reason for X to phi”. And that “rational” means “there is a reason”.
In Step 1, “not rational to believe” could have a positive meaning ("a reason not to believe") or a negative meaning ("no reason to believe"). So Step 1 is ambiguous as between:
1* If there is a reason not to believe that p, then there is a reason not to believe that p.
1** If there is no reason to believe that p, then there is a reason not to believe that p.
1* is a tautology. You might mean 1**. Step 1** is, on its face, somewhat controversial. The fact that there is no reason to believe that p does not entail that there is a reason not to believe that p. In addition, consider step 2, which states that:

If there is a reason not to believe that p, then it is not permissible to believe that p.

Given the nature of hypothetical syllogism (distribution) and contraposition, we ought to be able to do a contraposition on each step, 1 and 2, and then combine them through hypothetical syllogism to produce your step 4. But (unsurprisingly given the meaning of “ought” and “permission”) step 2, by contraposition would give us the statement that:
3*: If it is permissible to believe that p, then there is no reason not to believe that p.
Fine and dandy. That’s just what permissions mean. But 3* doesn't give us Step 4, for much the same reasons as I mentioned above. So because of the ambiguity contained in the positive and negative meanings of "not rational," I think your argument conceals the inference:
3** if it is permissible to believe that p, then there is a reason to believe that p.
Step 4 would then be tautologous from step 3**. But 3** is not what step 2 gives us by contraposition. So the hypothetical syllogism won't go through.
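Eric's gap between 3* and 3** can be exhibited with a one-line truth-table search (notation mine, not Eric's: P for "it is permissible to believe p", Rn for "there is a reason not to believe p", R for "there is a reason to believe p"):

```python
from itertools import product

# 3*  : P -> not Rn   (what contraposing Step 2 actually yields)
# 3** : P -> R        (what Step 4 would need)
# Any valuation satisfying 3* while falsifying 3** shows 3* does not give 3**.
countermodels = [
    (P, Rn, R)
    for P, Rn, R in product((False, True), repeat=3)
    if ((not P) or (not Rn))        # 3* holds
    and not ((not P) or R)          # 3** fails
]
print(countermodels)  # [(True, False, False)]
```

The single countermodel is precisely the situation Eric gestures at: believing p is permissible and there is no reason against believing it, yet no reason for believing it either.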

Phil --


  1. You may be being misled by my terminology here. I'm not using the term 'rational' so that any decision that is influenced by friendship or by emotion is "irrational". What it is rational for me to do may depend on any of the "normative pressures" that matter to me. So, if friendship really matters to Kirk, it is perfectly rational for Kirk to be influenced by friendship in making decisions. On the other hand, if Kirk is taking an absolutely crazy risk -- risking not just his own life but the lives of his whole crew -- in order to save Spock, then surely there is a sense in which it is true that he "ought not" to have saved Spock!

  2. I don't see any reason to think that there are any cases in which it is, as you say, "neither rational nor irrational" to believe p. To take your example, if there's only a 50% chance that your mother is gardening right now, then it is clearly irrational for you to have a confident belief that your mother is gardening right now. On the other hand, if the chances that she is gardening are sufficiently high (95%, say), it is presumably rational to believe that she is gardening. In between these extremes, there may be cases where it is rational to believe that she is gardening, and also rational to suspend judgment about whether or not she is gardening. But I see absolutely no reason for thinking that there are any cases in which it is "neither rational nor irrational" to have this belief.
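Ralph's picture in point 2 can be sketched as a pair of credence thresholds (a reconstruction of mine, not Ralph's own apparatus; the values 0.7 and 0.95 are illustrative placeholders):

```python
def rational_attitudes(chance_p, lo=0.7, hi=0.95):
    """Doxastic attitudes toward p that are rational at a given chance of p.

    Above hi, belief is rational; below lo, only suspension is; in between,
    both belief and suspension are rationally permitted. (The threshold
    values are illustrative placeholders, not Ralph's.)
    """
    if chance_p >= hi:
        return {"believe"}
    if chance_p >= lo:
        return {"believe", "suspend"}
    return {"suspend"}

# At every chance there is a definite verdict on believing p -- rational
# (in the set) or irrational (not in it) -- so no "neither" gap opens up.
for c in (0.5, 0.8, 0.96):
    print(c, sorted(rational_attitudes(c)))
# prints:
# 0.5 ['suspend']
# 0.8 ['believe', 'suspend']
# 0.96 ['believe']
```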

Eric --

Well, I'm not willing to "posit" that "X ought to phi" is normatively equivalent to "There is a reason for X to phi"! There are at least two crucial differences between the two statements.

  1. Like most formal semanticists, I believe that 'ought' is a weak necessity modal, and so obeys the principles of standard deontic logic: If p entails q, then "Ought(p)" entails "Ought(q)". So, even if you ought to phi, it doesn't follow that there is any reason in favour of your phi-ing. It could be that the reason why you ought to phi is that there is a reason in favour of your psi-ing, and your psi-ing entails your phi-ing.
  2. On the other hand, reasons can conflict, while if 'ought' is used in the same sense on both occasions, it cannot be that you "ought to phi" and also "ought to psi" if it is impossible to both phi and psi. It will be true that you ought to phi if and only if phi-ing is entailed by what you have decisive (or compelling or overriding) reason to do -- merely having some reason to phi is not enough!
So I don't accept the assumptions that your objection is based on. But even if I did, I find it hard to see how it could be simultaneously the case that there is no reason to believe p, but also no reason not to believe p. Surely, there is always a reason not to believe things that there is no reason to believe? Actions may well be different: perhaps sometimes there can be no reason to click your fingers and also no reason not to click your fingers. But beliefs are surely different: if there's absolutely no reason to believe something -- e.g. that the number of books in the British Library is odd -- there is also a reason to refrain from believing it!
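Ralph's closure principle in point 1 (if p entails q, then Ought(p) entails Ought(q)) can be verified mechanically on a simple "ideal worlds" reading of 'ought' (a toy semantics of my own, not Ralph's formal apparatus):

```python
from itertools import combinations, product

# Worlds assign truth values to two atoms: phi-ing and psi-ing.
WORLDS = [dict(zip(("phi", "psi"), v)) for v in product((False, True), repeat=2)]

def O(prop, ideal):
    """Ought(prop): prop holds at every deontically ideal world."""
    return all(prop(w) for w in ideal)

phi = lambda w: w["phi"]
psi = lambda w: w["psi"]

# Keep only worlds where psi-ing entails phi-ing (psi -> phi):
frame = [w for w in WORLDS if not w["psi"] or w["phi"]]

# Over every nonempty set of ideal worlds drawn from this frame,
# Ought(psi) carries over to Ought(phi):
closure_holds = all(
    O(phi, list(ideal))
    for r in range(1, len(frame) + 1)
    for ideal in combinations(frame, r)
    if O(psi, list(ideal))
)
print(closure_holds)  # True
```

This is the weak-necessity behaviour Ralph appeals to: you can "ought" to phi purely because phi-ing is entailed by something else you ought to do, without any independent reason favouring phi-ing itself.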

Ralph: thank you for the reply.

Some of what you say seems to support my points:
"surely there is a sense in which it is true that he "ought not" to have saved Spock!"
Absolutely - hence the nature of the moral dilemma. The first thing that Spock says when he is rescued is something like: you shouldn't have rescued me. And viewers of the movie are supposed (I believe) to feel the force of that argument - and to feel the force of Kirk's rejection of it as well. This is just the point I made about more than one normative pressure. You attempt to extend the meaning of the word "rational" here so that it incorporates all normative factors affecting a decision/belief. I think that is an illegitimate move. We recognise situations in which strong individual feelings - even strong individual moral feelings - conflict with "cold, hard facts". The word "rational" has always meant something like "based on cold hard facts (not strong feelings)," and it is not cool to try to extend the meaning of the word "rational" so that it encompasses those strong individual feelings as well. That's just not how the word is used in non-philosophical discourse; of course, you can redefine it if you want, but you'll lose the buy-in of ordinary English speakers.

(I also think it's an illegitimate move because it implies some kind of reasoned process of weighing up different normative pressures, and I suspect that some kinds of normative pressures are incommensurable. But I don't think that point is necessary for my argument about language usage.)

Some of it just doesn't seem very realistic. The way you envisage rationality/irrationality working doesn't seem very true to my linguistic/life experience. So for all percentages above, say, 70%, it is rational to believe X, and then suddenly at 69% it is irrational to believe X? With an overlay of "also rational to suspend judgment" overlapping both? This just doesn't seem to me to be how the words rational and irrational work. This seems like an example of the pretty common problem of philosophers wanting to define words in terms of binary features for ease of logical manipulation, while real language uses a whole range of definitional approaches: exemplars, oppositions, generic similarities. But even if we ignore what natural language does with these words, I can't see any reasoned basis for allowing any percentage of uncertainty in our rational beliefs. Why is it rational to believe something which I know is only 95% likely to be true?

There's also the problem that you seem to be defining rationality purely on the basis of how well supported a belief is. What is rational may also be affected by our intentions - I guess Pascal's wager must be the classic example, but I'm sure there are better ones.



