
August 17, 2006

Comments


Doug,

you ask whether TCR seems plausible:

TCR:
P has a reason to do x if and only if P has an object-given reason to desire that Ox obtains, where Ox is the total outcome resulting from P’s doing x.

I wonder if TCR is compatible with some vital distinctions; it may well be. (I will not comment on Scanlon’s example.) Here’s a line of thought:

Let’s distinguish a desired event E (say, the window opening), an action A (your bringing it about that the window opens), and the action’s further consequences C (say, the wind moving your papers so that by chance you find the paper you’ve been missing for a long time).

All these three aspects may have desirable and undesirable features. Action A might be ‘fitting’ in the situation, or have all sorts of virtuous and vicious features, or express fitting attitudes, or be a promise, over and above bringing E about. And C may be very desirable, although unlikely, and so on. (So it would be wrong to suggest that reasons to desire E+C are all there is to the matter. But TCR does not suggest that.) TCR takes it that the outcome O (or Ox) is the ‘sum’ of the event, action and consequences.

As formulated, TCR seems to presuppose only that you have some reason to desire that O obtains as soon as you have some reason to desire A, C, or E, whether or not this reason is overridden by other reasons. And then, if and only if there is some reason to desire O, there is some reason to do A.

Sounds right, although it might be too generous: if you want to find the missing paper, do you really have a reason to open the window and hope that by chance you’ll find the paper? Perhaps you do, a very small reason, if the likelihood is small.

An overall version of TCR (concerning sufficient reasons) might be more controversial:

TCRO:
P has sufficient reason to do x if and only if P has sufficient object-given reason to desire that Ox obtains, where Ox is the total outcome resulting from P’s doing x.

This might get things wrong. You might have sufficient reason to do some A (say, to express your condolences, to keep your promise by doing A, or to refrain from murdering) even when the reasons to desire that E or C don’t obtain are so strong that, on balance, you have insufficient reason to desire O.

Take a case where I have to murder one in order to prevent someone else from murdering five.
E1= one murder takes place
E2= five murders take place
A1 = I bring about E1, and prevent E2
A2 = I refrain from bringing about E1 and from preventing E2

In this case, it might be that there is sufficient reason to desire O1 (E1+A1), but nonetheless I have insufficient reason to do A1. That is, the wrongness of murdering may make my reasons to do A1 insufficient, while the badness of five murders may make my reasons to desire O2 insufficient.
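
To set out the structure of the case (writing ‘+’ for the combination of an event and an action into a total outcome, so that O2 = E2 + A2 on the model of O1):

\[
O_1 = E_1 + A_1, \qquad O_2 = E_2 + A_2.
\]

The alleged counterexample is then the conjunction of sufficient reason to desire O1 with insufficient reason to do A1, which contradicts the right-to-left direction of TCRO’s biconditional.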

(I suppose a defender of TCRO could respond by stating that reasons to desire outcomes are agent-relative, so that while everyone else has sufficient reason to desire only O1, I have sufficient reason to desire only O2.)

Arto,

Thanks for the interesting comments. You write,

As formulated, TCR seems to presuppose only that you have some reason to desire that O obtains as soon as you have some reason to desire A, C, or E, whether or not this reason is overridden by other reasons. And then, if and only if there is some reason to desire O, there is some reason to do A. Sounds right, although it might be too generous: if you want to find the missing paper, do you really have a reason to open the window and hope that by chance you’ll find the paper? Perhaps you do, a very small reason, if the likelihood is small.

Yes, that sounds right to me. If opening the window increases the chances that you'll find the paper, that fact counts in favor of your doing so and provides you with a (not necessarily sufficient or strong) reason to open the window.

Regarding TCRO, I neither accept nor reject it. It certainly doesn't follow from TCR, not that you suggested that it did. One could accept TCR, make a distinction between partial (or egoistic) reasons and impartial (or utilitarian) reasons, and hold, as Sidgwick did, that agents always have both sufficient reason to do what will make things go partially best and sufficient reason to do what will make things go impartially best. I think, then, that the issue of whether or not TCRO is plausible is separate from the issue of whether TCR is plausible.

In any case, you're right that on both TCR and TCRO there can be agent-relative reasons. And so, on TCR, I may have better reason to desire the total outcome that includes both my doing A2 and E2 taking place and thus better reason to do A2.

If you have any other thoughts, I would be interested in hearing them, as you did point out a number of interesting things that I hadn't thought about carefully enough.

Doug,

just to continue: if we distinguish E, A, and C, as suggested, we can unpack TCR (roughly): P has a reason to do A when (i) P has a reason to desire that P does A; or (ii) P has a reason to desire that E happens; or (iii) P has a reason to desire that C happens.
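
Put semi-formally (a rough sketch: write R(P, A) for ‘P has a reason to do A’, D(P, ·) for ‘P has an object-given reason to desire that · obtains’, and OA = A + E + C for the total outcome):

\[
D(P,A) \,\vee\, D(P,E) \,\vee\, D(P,C) \;\Rightarrow\; D(P,O_A), \qquad R(P,A) \;\Longleftrightarrow\; D(P,O_A).
\]

So a reason to desire any one of the three components already suffices, via the outcome, for a reason to do A.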

Clause (i) initially sounded like there’s something funny going on, but perhaps not: P’s doing A has some features which give P reasons both to do it and to desire that P does it.
And clause (iii) might lead to a very broad view of what we have some reason to do.

Concerning the latter, you write "if opening the window increases the chances that you'll find the paper, that fact counts in favor of your doing so". That formulation is helpful in thinking about a different case: how about a case where opening the window decreases the chances, but nonetheless might (though unlikely) lead you to find the paper? There is "a reason" for you to open the window, but more reason not to. I suppose that was the kind of case that I thought might make TCR too 'generous'.

It seems to me that I can have a very good reason to do x without having an object-given reason (to do x) derived from Ox. My reasons for desiring Ox might be that Ox is better than any alternative outcome Oy, though there is nothing at all good about Ox.
Suppose S is dying from a debilitating disease and already in constant pain. S might have a reason to take pill P to slow down the rate at which his health deteriorates and his pain increases. I guess x would be taking pill P. The result of taking pill P is that S is less badly off than he would have been had he not taken P. But the outcome Ox of taking P is, in itself, bad. It offers no object-given reason for doing x. Still, it seems pretty clear that S has a very good reason to do x (and to bring about Ox).

I am uncomfortable with this part of TCR:

P has a reason to do x if P has an object-given reason to desire that Ox obtains, where Ox is the total outcome resulting from P’s doing x

Consider an example:

Frank Sr. was abusive to Frank Jr. when Frank Jr. was growing up. Now Jr. is grown and Frank Sr. is old, but Jr. can't forgive him. Frank Sr.'s savings become depleted and he is on the street, with no support. Then, Frank Jr. is contacted by a family friend who is a lawyer and who says Frank Sr. has inherited some money - the friend asks if Frank Jr. knows how to find Frank Sr. (which he does).

I think Frank Jr. has (some) reason to desire that Frank Sr. suffer by living on the street and some reason to desire that he not inherit the money; he has some reason to desire to, e.g., lie to the family friend. Each reason to desire is derived from the fact that he was abused by Frank Sr.

But I would deny that he has some reason to actually, e.g., lie to the family friend.

In Scanlon's terminology, we might say: given the abuse, it seems to Frank Jr. that he has reason to cause Frank Sr. to suffer, and those apparent reasons are justified by the abuse he suffered, but those are, in fact, merely apparent reasons; the fact that X abused you is not a reason to lie to someone else to keep X from receiving an inheritance.

As far as I can tell, the part of TCR quoted entails denying this.

Arto,

I now think that this is what we should say about your opening the window case. Your opening the window will have certain (perhaps, unknown) consequences. Let's suppose that one of those consequences will be that you'll find your lost paper. If this is a consequence of your opening the window, then the fact that your opening the window will have this consequence (a consequence that you have an object-given reason to desire) constitutes a reason for you to open the window. Now you may not know that you have this reason. Indeed, given your epistemic situation, it may seem to you that opening the window will most likely make it more difficult for you to find the paper. But isn't this just a case, then, where you have a reason to do something that you are unaware of given your limited epistemic situation? That's what I'm inclined to say.

Mike,

In light of your comment, I suggest that we should accept:

TCR* P has better reason to do x than to do y if and only if P has better object-given reasons to prefer Ox to Oy.

Brad,

You write,

given the abuse, it seems to Frank Jr. that he has reason to cause Frank Sr. to suffer, and those apparent reasons are justified by the abuse he suffered, but those are, in fact, merely apparent reasons; the fact that X abused you is not a reason to lie to someone else to keep X from receiving an inheritance.

As far as I can tell, the part of TCR quoted entails denying this.

TCR, by itself, doesn't entail the denial of this. It entails this only if one assumes that Frank Jr. has a reason to desire that Frank Sr. suffers. But why should we assume this? We could say, as you might put it, that: "given the abuse, it seems to Frank Jr. that he has a reason to desire that Frank Sr. suffers, and it seems that those apparent reasons are justified by the abuse he suffered, but those are, in fact, merely apparent reasons; the fact that Frank Sr. abused Frank Jr. is not a reason for Frank Jr. to desire that Frank Sr. suffers." Reasons here refer to reasons of justification, not reasons of explanation. Clearly, the abuse would explain why Frank Jr. desires that Frank Sr. suffers, but why think that it justifies his desiring that Frank Sr. suffers?

A problem with TCR* may be that it sounds like it is comparing overall reasons.

A formulation that avoids the overall level might be something like:
P has a reason to do [x instead of y] if and only if P has an object-given reason to prefer Ox to Oy.

Doug,

about the window case, you're right to point out that I might not know all reasons (so relativity to knowledge, evidence or beliefs might be one way to go).

I suppose unforeseen consequences are never known at the time of action (when reasons to act are relevant). So another line to take could be the distinction between actual and expected (or probable, etc.) utilities. I wonder if Ox ought to be modified to accommodate the probabilities of consequences? As it stands, it is about actual consequences only?

Arto,

An act's total outcome includes its actual consequences, not the consequences that some agent believes it will have or will likely have. I follow Parfit in thinking that what I have reason to do depends on the facts, whereas what it is rational for me to do depends on my beliefs. It is rational for me to do x when I have beliefs whose truth would give me sufficient reason to do x (see Parfit's Climbing the Mountain). Thus it is rationally permissible for me to open the window if and only if I have beliefs whose truth would give me sufficient reason to do so. I see no reason to introduce probabilities into TCR. Again, it seems to me that what I have reason to do depends on what the facts are, and we can account for the rationality of acting contrary to what an agent has most reason to do by appealing to the agent's beliefs.

Arto,

You say, "A problem with TCR* may be that it sounds like it is comparing overall reasons."

Why is this a problem?

Regardless, I think that I would accept your suggestion for how to state what one has a reason to do as opposed to what one has better reason to do overall.

Doug,

Right, I should have said "if that's a problem". It takes us back to something like TCRO, which might have a problem with cases like murdering one to prevent five murders (but might not).

I agree with what you say about rationality and beliefs. But what about objective probabilities and knowledge of them (as opposed to knowledge of actual consequences)? Say, in cases where one kind of medicine, M1, has a minimal chance of killing and another, M2, has a massive chance of killing? It seems like a problem for TCR if it cannot accommodate the difference that objective probabilities make.

Arto,

I think that insofar as it's plausible to suppose that S has an agent-relative reason not to commit one murder even to prevent five others from each committing murder, it’s also plausible to suppose that S has an agent-relative reason to prefer the outcome where five others have each committed murder to the outcome where S has herself committed murder.

Regarding probabilities, I’m not convinced that there are any objective probabilities, at least not at the macro-level at which we act. But if I’m wrong, I suspect that TCR could be suitably revised, or at least that it will be no more difficult for TCR to accommodate objective probabilities than it will be for any other theory of practical reasons to do so.

Doug,

Right, I'd be more willing to accept objective probabilities. It might be that probabilities affect reasons to desire differently from reasons to act. We may have strong reasons to wish that very unlikely but very good things happen, but very weak reasons to do anything about them, given the slim chances. This may not violate the letter of TCR, as long as it is about there being "a reason" to act. (And it presumes that there are objective probabilities.)

Here are two other types of cases, however, which drive a wedge between reasons to desire and reasons to act: we might have reasons to desire that E occur (say, getting a favourable review, or getting a birthday card), but it might spoil things to bring E about ourselves. So no reason to act, but a reason to desire.

And a more straightforward case is that we might desire that E occur, but be unable to do it ourselves. Again, no reason to act, but reason to desire.

TCR might prove immune to these as well, but I'm not sure at the moment.
(It depends on how we describe the "outcomes of action": even if we have a reason to desire some birthday cards, we have no reason to desire a self-sent birthday card; and if we cannot bring E about, then E cannot be part of the outcome of our action. But then again, can't we have reasons to desire that we do something even when we cannot do that?)

Arto,

You make an interesting point, which I'll need to think about, that objective probabilities may affect reasons for desiring and reasons for acting differently. But I think that TCR is immune to your latest worry, for TCR does not say that for any possible world W1, if S has a reason to desire W1, then S has a reason to act so as to produce W1. As you point out, there may be possible worlds that we have reason to want to be actual but nothing that we can do to make them actual.

What TCR does say is this: take those acts that I can perform (say, x, y, and z) and compare the various resulting total outcomes (Ox, Oy, and Oz). I have better reason to perform x than to perform y if and only if I have better reason to prefer Ox to Oy.

Doug,

Yes, it looks like TCR* nicely handles the problem.

TCR* P has better reason to do x than to do y if and only if P has better object-given reasons to prefer Ox to Oy.

I'm not sure whether it matters, but I think TCR* might also give us a reason to (in the Tom Crisp example you gave) eat mud. Outcomes in which S eats mud might be better than any others open to S, say, if someone has a gun on S and (God knows why) insists on S's eating mud. For my part, I think S does have a reason to eat mud in such a case, since the best available outcomes all have him eating mud. But maybe you don't want to be committed to that.

Doug,

You have my worry right.

TCR will be plausible only if we have in hand a conception of justificatory reasons for desires that rules out the thought that Frank Jr. justifiably desires that Frank Sr. suffer. I wonder what conception of j-reasons for desire you think will do the trick.

A few comments (the first continues the running line of thought, while the other two just occurred to me):

(1) Following D'Arms and Jacobson's papers I think we should avoid adopting a moralized or ethicalized conception of *the* j-reasons for or against conative (or hybrid) states like desires and emotions. We need (at least) a three way distinction between considerations that explain a desire, those that justify thinking the desire is a fit response, and those that justify thinking it is morally or ethically acceptable, commendable, or whatever. I was thinking that the desire to see Frank Sr. suffer is "justified" in the sense that it is a fit response - that there is no reason to criticize it as an unfit response.

One way to cash out talk of fitness is by appeal to an evolutionary story about why we have desires of the relevant type (William Miller's new book, "An Eye for an Eye", also comes to mind). I think D'Arms and Jacobson may favor this line, but I am not sure off-hand, and I am not sure that this is the best way to start thinking about the notion of fitness; it is one option, though. In any case, I think the D/J papers show the need to make the distinction.

(2) I am unclear what conception of desire you have in mind (e.g. phenomenological, dispositional, or attention-directed). If the latter, you, of course, need a way of making sense of attention-directed desire that differs from Scanlon's, or TCR would become circular (because he makes sense of attention-directed desires, in part, in terms of apparent reasons to act).

(3) This brings a new thought to mind: Scanlon flirts with the idea that reasons for action are equivalent to reasons to intend, and that might suggest another way to understand his resistance to TCR; he might resist by appeal to Bratman's arguments that intentions cannot be reduced to desires.

Brad,

Thanks very much for the interesting comments. I take your point and acknowledge that I need to think about the concerns that you've raised more carefully. That said, I think that this blog is just the place for the sort of off-the-cuff (and thus quite possibly wrong-headed) remarks that I’m about to make. First, I agree that where TCR (or TCR*) refers to reasons to desire, these are not exclusively reasons that morally justify such a response. As I understand it, TCR is about reasons generally, and one can certainly have a non-moral reason to do x or to desire that p without having any morally justifying reason to do x or to desire that p. Still, I’m not sure that what I have in mind is fittingness either. Desiring that p might be a fitting response even though there is no reason to desire that p. So I might want to question whether there are only three types of reasons to desire: (1) those that explain a desire, (2) those that justify thinking the desire is a fit response, and (3) those that justify thinking it is morally or ethically acceptable. What about those that justify thinking that it is rationally acceptable? If we talk about reasons to believe, would you think that there are only the first three? Perhaps you think that to say that believing p is a fitting response is just to say that believing that p is rationally justified? I’m not sure either way. So I would suggest that insofar as you think that there is some reason (the kind that makes it rational) for Frank Jr. to desire that Frank Sr. suffers, you should think that there is some (not necessarily sufficient) reason for his acting so as to make Frank Sr. suffer. I can, after all, have reason to do something immoral.

Second, if the evolutionary account is the proper account of fittingness, then I certainly don’t think that the fittingness of desires is what determines whether there is a normative reason to desire something. I don’t see how you get normativity out of this sort of descriptive account.

Well, I leave it at that for now. I’ll think about your points (2) and (3).

Mike,

I agree that S does have a reason to eat mud in such a case.

Arto,

Concerning objective probabilities, if there are such things, then there can be more than one outcome associated with a particular action. That is, the same particular action will quite possibly result in different possible outcomes, although the objective probabilities of each will not necessarily be the same. So suppose that if you open the window (i.e., do x), there are just three possibilities: the wind will shuffle around your papers so that you will find the paper that you’re looking for (1) much more quickly, (2) just as quickly, and (3) much less quickly. Call these respective outcomes Ox1, Ox2, and Ox3. Let’s suppose that if you do x, these are the objective probabilities associated with each possible outcome: .1(Ox1), .1(Ox2), and .8(Ox3). Now let’s call the set of possible outcomes along with their associated probabilities “Ox.” It seems to me that you do not have a reason to desire Ox (that set of outcomes with their attached probabilities) and thus TCR does not imply that you have a reason to do x. Thus TCR is not too generous.
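
To make the arithmetic explicit: suppose, purely for illustration, that we assign finding the paper much more quickly a desirability of +1, just as quickly 0, and much less quickly -1. The probability-weighted desirability of Ox would then be

\[
0.1(+1) + 0.1(0) + 0.8(-1) = -0.7 < 0,
\]

so, on these assumed values, there is no net reason to desire Ox, and hence TCR yields no reason to do x.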

Doug,

I hope this is indeed the place for off-the-cuff remarks!

Here are some more:

(1) I think you are certainly right that there are more than three types of reasons to desire - that there are at least also "rationality" reasons to desire. We could also think about introducing reasons that justify thinking a desire prudent, etc.

(2) Some remarks on "fitness":

I start to think of the fitness of conative attitudes by turning to Foot's old argument about pride: if A is proud that she is X, then A must think that it is good for her to be X. Her original target was the idea that we could take up conative attitudes like pride towards just anything. But the fittingness idea comes in as follows. Given the conceptual connection between pride and thinking good for, we can say that A's being proud that she is X is fitting just in case being X is actually (or is, or could be, rationally believed to be, etc.) good for A.

Thus if Bob is proud of having never read any of the assigned books for class (I had a friend like this in high school - no joke), his attitude is open to criticism for being unfit. D'Arms and Jacobson attack that account of pride (I don't think their criticism works...), but I hope this gets the idea of fitness across.


(3) TCR and a phenomenological account of desire might run into problems:

On the "if reasons to act then reasons to desire" half of TCR: I hate boiled brussel sprouts - I hate the smell and the taste. Imagine I can get over some poisen only by eating brussel sprouts. I force them down but all the while have a desire spit them out. I feel no desire to eat them, but I have a reason to eat them (a reason on which I act). Why think I have a reason to desire to eat them? Am I open to criticism for failing to have it? What kind of criticism?

Related problems seem to crop up for a dispositional-desire account if we think of the need to eat them as a one-shot deal.

(4) This all makes me curious: how do you make sense of the claim that a feature of an object gives us reason to desire it? Do you advert to the object's being good or bad for people or sentient creatures?

Doug,

I think we may disagree where reasons fall in that case, but that is not an objection to TCR/TCR*.

My view is that I would have a reason (that is, more than no reason) to desire Ox in that case. The reason is that Ox contains the non-zero possibility (Ox1) that I find the paper quicker than otherwise, which is desirable.

But then again, this seems to give me a small reason (more than no reason) to do it as well.


Brad,

You write,

On the "if reasons to act then reasons to desire" half of TCR: I hate boiled brussel sprouts - I hate the smell and the taste. Imagine I can get over some poisen only by eating brussel sprouts. I force them down but all the while have a desire spit them out. I feel no desire to eat them, but I have a reason to eat them (a reason on which I act). Why think I have a reason to desire to eat them?

So x is eating boiled Brussels sprouts, and Ox is the resulting outcome. It's clear that you have a reason to do x. Thus TCR implies that you have better reason to prefer Ox to its alternative, Oy (let y = ~x). What reason do you have to prefer Ox to Oy? Answer: Ox is an outcome where you survive the poison, whereas Oy is an outcome where you don’t. I don’t see why you think that TCR implies that you have a reason to desire eating Brussels sprouts for its own sake, as you seem to be suggesting.

You also ask,

This all makes me curious: how do you make sense of the claim that a feature of an object gives us reason to desire it? Do you advert to the object being good or bad for people or sentiment creatures?

No. I think that goodness is just the purely formal, higher-order property of having other properties that provide us with reasons to desire. And I don’t have a theory about what we have reason to desire, because I think that we’ve reached bedrock at this point.

Doug,

Oops. I forgot about the stipulation that the desire is for the whole outcome and was thinking about reasons to desire x (eating) rather than Ox. Sorry about that.

In any case, my worry might persist even given the shift to Ox.

I was thinking about a phenomenological account of desire - e.g. one on which desiring Ox involves feeling positive towards Ox (which I take it includes intrinsic features of x) or one on which desiring Ox involves feeling (or being disposed to feel) pleasure at the thought of Ox.

Now the argument: I do have reason to *choose* that I eat the sprouts as part of the outcome that involves me not dying, but even accepting that, I might be pained by the thought of eating them and pained by the thought of the whole outcome that involves that eating (despite thinking of that eating as a means to avoiding an even greater pain - or undesirable consequence - of death).

I would be choosing the lesser of two evils - say, painful death by poison vs. eating the nasty sprouts but living on - and I can choose the better of two unpalatable options without thereby being committed to finding that option palatable. And I don't see anything objectionable about that.

Let me start with the confession that I haven't read Scanlon, so it is perfectly fair to answer this question by telling me to shut up until I have. RTFM, so to speak. Just from the recapitulation of the tennis example that has been given here, though, I'm very puzzled about this determination that there are no strong reasons for or against playing to win. On what basis is this determined? If it isn't determined on the basis of the possible outcomes of playing to win, or of not playing to win, then isn't the question being begged at this point?

Dale,

You ask, "I'm very puzzled about this determination that there are no strong reasons for or against playing to win. On what basis is this determined?"

It's just stipulated. By a strong reason for or against playing to win, Scanlon perhaps has in mind something like your having promised someone that you would, or would not, play to win. Absent such strong reasons, we're to assume, then, that your reason to do what you would enjoy most is decisive.

Doug,

Thanks for the clarification, although this only makes it seem more likely here that the question is being begged against the teleological account. A non-teleological account of reasons is being assumed from the outset. Am I just completely off base here?

Dale,

How so? At this stage, it sounds like all the reasons are teleological. The fact that playing to win would result in my experiencing more enjoyment sounds like a teleological reason to me. Maybe I'm not following you.

Doug,
I had in mind your example of a strong reason: that you promised someone not to play to win. Or does the fact that your having broken such a promise would be part of the outcome of your playing to win make that a teleological reason, too?

Dale,

Yes. On Scanlon's view, the act's outcome includes the act itself and the fact that it constitutes a promise-breaking. Moreover, the teleologist's value theory, he allows, could be agent-relative, such that my promise-breakings are worse relative to me than those of someone else.

Okay, this helps. I was construing teleological more narrowly, so of course it looked to me like Scanlon was assuming that there were non-teleological reasons. Even if Scanlon isn't going wrong in the way that I thought, he does seem to be going wrong in the way that you suggest. Thanks for clearing this up for me.
