


June 28, 2012



So here's a question I have. The summary of Mark's paper by our commentators says:

He suggests that there can be state-given right-kind reasons for attitudes and not just such reasons against them. This holds at least for intentions (though not for beliefs).

I'm wondering why the Schroeder account doesn't allow for state-given reasons for belief. Here's what I think might be an example. I'm aware that there is research showing that in-person interviews give one evidence about job candidates, but that people tend to weight it too heavily. I could imagine thinking that this made it rational to form one's beliefs about which candidate is best before the interview stage. And the reason this is a good idea is that the settled nature of belief might make any subsequent beliefs less influenced by evidence one might otherwise overweight. It isn't that you couldn't change your mind in the face of really strong evidence; it's just that you'd be less likely to form a false belief on the basis of weak but inconclusive evidence that you put too much weight on.

In the way I imagine the case, it is the epistemic value of having the belief one does that gives one a reason to form it now rather than waiting to make up one's mind. So it seems to me to be the kind of thing one might mean in saying that the right kinds of reasons must relate to the distinctive nature or point of belief. And leaving the positive theoretical proposal aside for a moment, it also intuitively seems to me to be a right kind of reason for belief.

I can imagine someone objecting that it can't be of the right sort because it can't be your whole reason to believe what you do -- you'd also have to have evidence for it. But I'm thinking reasons don't have to be your whole justification to be reasons; they only have to be part of the story of why, together with other applicable reasons, you are justified in doing what they favor.

I throw this out without having gone back to check whether there's an answer in Mark's paper that is the subject of the discussion. Since this is a blog discussion I take it that's OK. Anyway, I was wondering about this.

Many thanks to Mark Schroeder, for his paper, and to Wlodek Rabinowicz and Toni Ronnow-Rasmussen, for their précis.

Since Schroeder’s paper takes my own view as one of its targets, I have spent some time working through it. In the process, I ended up writing a paper. I’ve posted a draft of that paper here, for those who are interested (comments on it are welcome, citations on it are as yet incomplete).

My main concern with Schroeder’s argument connects to the first question raised by Rabinowicz and Ronnow-Rasmussen. I, too, am concerned with Schroeder’s reliance on earmarks. I suspect it is a fatal flaw in Schroeder’s argumentative strategy.

The basic structure of Schroeder’s argument is as follows: He first collects four “earmarks” of the distinction between reasons of the right and wrong kind. He then locates cases in which those earmarks are present, but in which they distinguish among reasons that are not object-given. He concludes that any account of the right and wrong kind of reason that relies on the object-/state-given distinction is incorrect. (The actual argument is more complex. I treat it in more detail in the paper linked above.)

As Schroeder acknowledges, the argument relies on the claim that “if [the reasons that appear in his cases] bear all of the marks of right-kind reasons, they are right-kind reasons.” (p. 466) (He elaborates, “after all, the ‘right-kind'/‘wrong-kind' distinction was just a catch-all label designed to cover an important class of differences that arise in a variety of domains.” [p. 466]) This is an instance, I take it, of what Schroeder later calls “a key methodological principle: if it quacks like a duck, it’s a duck.” (p. 480) This principle appears in other places in Schroeder’s work. I am surprised whenever I meet with it. It is not true. Even of ducks.

At the beginning of an inquiry, we may use a word as a catch-all label for a set of earmarks. At that stage, we may have no better way of identifying ducks, or kinds of reasons, than by the earmarks. (We might then assume, justifiably but defeasibly, that all that quacks is a duck.) However, by the end of the inquiry, we hope to have an account. If the account is a good one, it may allow us to discount certain apparent cases as only apparent—the account may force a reclassification of things that, we admit, bear the earmarks. (Once we have an account of what it is to be a duck, we might deny the classification to certain geese, or to robots at Disneyland, while admitting that they quack just like ducks.)

So, in a nutshell, the flaw I think I see in the argument is that, once an account of some phenomenon has been proposed, we cannot simply rely on the initial earmarks of the phenomenon, the original symptoms that guided inquiry, to claim that the account is incorrect. Accounts, if they are good, show what it is that explains the earmarks. With an account in hand, we often come to see that certain cases in which the earmarks appear are, in fact, not cases of the thing for which we have provided an account. The account, if otherwise good, can force a reclassification of such cases. So the fact that other things quack is, in itself, no criticism. That other things quack would be a criticism only if the account itself either asserts that what it is to be a duck is to quack or else somehow implies that all quacking things are ducks. To determine that, we need to look at the details of the account. We cannot depend on the initial earmarks.

In the paper linked above (and here), I re-present the details of my own account (from “The Wrong Kind of Reason,” JPhil 2005) and consider how it will handle the cases Schroeder presents. I argue that it can handle the case that may seem to cause trouble for it—that, even though the reasons in that case may quack, they are still the wrong kind of reason.

Schroeder’s other cases are interesting and challenging, and I spend some time on them as well. At the end of the paper, I reflect a bit on the very different approach to reasons taken by Schroeder and myself. I think the most interesting issues lie there.

Wow - thanks to Toni and Wlodek for getting this discussion started, to Mark, and especially to Pamela for engaging so seriously with my paper! I'll need to digest her full paper over the next 24 hours (or much longer), but let me try to start to address some of the main questions so far. Hopefully my remarks will make it even more transparent than it already is where the holes in my paper are, so that others will continue to jump into the discussion.

Three out of Wlodek and Toni's four questions, as well as Pamela's response, are related to my 'earmark' strategy for keeping track of where we can find something that looks like the same kind of thing that the distinction between 'right' and 'wrong' kinds of reason was supposed to track. Wlodek and Toni start by asking for clarification about the role of these earmarks, and end by suggesting that I might think they are so strongly connected to the phenomenon they mark out that they might serve as an analysis of it. In between, they introduce a case that starts to test the neatness of how the earmarks line up. And Pamela, of course, suggests that earmarks cannot do nearly as much for me as I would like.

The role that earmarks are supposed to play is much more like the one Pamela describes than the one Wlodek and Toni's remarks suggest. They don't provide, and are not intended to provide, any sort of analysis, and I was even deliberately vague about exactly what their standards of application are, relying on the hope that the cases about which I needed judgments are clear enough, regardless of how some other cases are resolved. For example, in my 'asymmetry of motivation' earmark, I was careful to avoid saying exactly what sort or how strong an asymmetry in motivation is involved. So I hope that helps to begin to address Wlodek and Toni's first question. Moreover, in keeping with the spirit of this approach, I'd also like to adopt some of Pamela's remarks about the limits of earmarks in response to Wlodek and Toni's second question.

That leaves me with the question of why I see my cases as ones that should push us to realize that what was most interesting about the right/wrong distinction is a broader phenomenon, rather than as cases that should be seen as casualties of fortune to be explained away, once we realize that they are not captured by our account. Pamela rightly takes me to task over the literal truth of "if it quacks like a duck, it's a duck" - after all, some things that quack like a duck are mechanical hunting decoys. But I do think that my cases put pressure on us to ask seriously: why isn't the most fundamental, interesting thing that the earmarks initially led us to notice, and of which we originally wanted to give an account, present in these cases? Isn't there something interesting and important here that all of these cases share? And isn't it important when we want to know which reasons bear on the rationality of these attitudes?

I hope to try to say more about this later, if there is sufficient discussion, and I'll comment about Wlodek and Toni's third and fourth questions tomorrow. But I should close by saying that I particularly like Mark van Roojen's example. I began with the idea that there would be right-kind state-given reasons for belief as well as against it (I rely on this idea in a paper forthcoming in Phil Studies), but I talked myself out of this after worrying that with belief, unlike with intention, it isn't possible to 'force the question', because you can always act on the basis of your subjective probabilities, even if you remain ultimately agnostic in binary terms. However, Mark presents us with a case in which the question is forced not because the time to act has come, but because we have strong evidence that our future selves are not to be trusted. It's a really great case, and I'm strongly tempted to agree with him about it.

I want to float a worry about overgeneralizing from a small set of cases. Mark has this worry about identifying the right/wrong distinction with the object-given/state-given distinction. I’m wondering if his argument overgeneralizes when denying the identity. Mark's cases are about the rationality or the correctness of beliefs and intentions, and from that we are to conclude that the two distinctions above are not the same. To support that conclusion, though, I think we’d like two more things. First, we’d like cases analogous to Mark's that address state-given reasons that are evaluative-status-making (e.g., concerning the desirable, the admirable, etc.), and other statuses where we are tempted to analyze them in terms of right reasons. Maybe those are to be found elsewhere, but I think Mark needs them for the argument (or would like them, just to avoid explaining why his examples only come up for certain statuses), and they strike me as much more controversial. For example, I doubt there are state-given reasons not to be amused by a joke that help to make the joke genuinely unfunny.

Another thing we’d need is some motivation for analyzing correctness or rationality in terms of reasons. I think one main attraction to fitting attitude analyses for things like admirability is that “x is admirable” entails “someone has reason to admire x”. I doubt there are similar entailments from “x is [the] correct [way to tie a knot]” to “someone has reason to [tie a knot] x[ly]” or from “belief B is [would be?] correct [true?]” to “someone has reason to have B.” There are analyses of rationality that are not given in terms of genuine reasons (as opposed to apparent reasons, e.g. – maybe we need to distinguish senses of rationality here) that I at least find plausible, and if the correctness of belief has to do with truth it’s not clear how this will connect up to reasons.

Basically, I’m floating the idea that the wrong kind of reasons problem does not arise for the statuses Mark discusses because a reason-based analysis of those statuses is not the way to go. Maybe Mark's cases, plus the plausibility of identifying the right/wrong distinction with the state-given/object-given distinction in a broad range of other cases, further push us in this direction? That is not to say there are not interesting distinctions to make concerning reasons to believe and reasons to intend that Mark has brought to light with his cases. Just that they might not bear on the right/wrong distinctions needed to analyze certain evaluative and normative statuses in terms of reasons.

I'm pretty sympathetic to Mark's main claim - that there can be right-kind reasons against belief and intention which are not object-given. But I have a question about the suggestion that there can also be right-kind reasons for intention which are not object-given. Mark's suggestion is that, e.g., the fact that you need to coordinate with your wife about who will have the car tomorrow is a reason to make up your mind about whether to go to LA, and thus a reason to intend to go to LA. But I'm not sure why the last bit follows. Why not think that the need to coordinate is a reason (of the right kind) against having neither intention, and so perhaps a reason to [intend to go to LA or intend not to go to LA] but not a reason to intend to go to LA? It seems like this suggestion would be enough to explain why this consideration makes it rational to intend to go to LA (since having neither intention is an alternative to intending to go to LA). But it wouldn't require us to give up the idea that all right-kind reasons to intend to A are reasons to A. (A similar move could be made in response to Mark van Roojen's example.)

Matt - I don't think your standard of evidence for whether something should be analyzed in terms of reasons is adequate. Take 'good torturer'. I don't think the fact that someone is a good torturer entails in and of itself that anyone has a reason to have any positive attitude toward her, but I think even the attributive 'good' should be analyzed in terms of reasons. The analysis just shouldn't entail that there needs to be anyone with those reasons - it could say, for example, that there is a reason for anyone who is in the market for a torturer to select her. That entails that anyone actually has the requisite reasons only on the assumption that there is in fact someone in the market for a torturer (which there may not be).

Jonathan - it's true that the reason in question counts equally in favor of both intentions. And I don't have any examples that aren't like this, so I agree that you can't get right-kind reasons for intention or belief willy-nilly. If we adopt contrastivism about reasons, we could say that it is a reason to intend to go rather than have no intention, but not a reason to intend to go rather than intend not to go. But I think there is a natural sense in which this reason really does count in favor of each intention, because in serving to make the state of either-intending-to-go-or-intending-not-to-go more rational, it has to also make each intention more rational.

Thanks, Mark,
I didn’t mean to suggest a standard of evidence for reasons analyses. Existential reason entailment is one big motivation for me to analyze something in terms of reasons (with supporting considerations), but there could be others. Are you suggesting entailment of conditional reason claims as an alternative? I’m not sure about the example – I don’t think that anyone in the market for a torturer has some reason to pick a good one, but maybe clarification on being in the market for something would help. That might get into your other work.

Example aside, then, I’m worried more generally that entailment of conditional reason claims is too inclusive a standard, even conditionals concerning people in the market for things. I suppose the fact that x is the only sharp knife entails that anyone who is “in the market for” a sharp knife has reason to select x (is that right?). But that does not seem to be a motivation to analyze the property of being the only sharp knife in terms of reasons. As you might guess, I have similar views on attributive uses of ‘good.’ I’m not tempted to give reason-based analyses because they entail no existential reason claims and I don't yet see another reason for reason-based analyses here.

Assuming I’m wrong about the above, there is this residual worry I have if rationality and correctness are the only examples we use for state-given reasons. Why shouldn’t there be similar examples for the evaluative properties that are the most tempting candidates for reason-based analyses, and where state/object distinctions seem to do pretty well to divide wrong and right reasons? Maybe you think such examples discussed by others are good?


I don't think entailing conditionals about reasons is grounds to analyze something in terms of reasons, but I do think that uniformity in treatment of 'good' and other evaluatives, as well as general grounds for thinking that what all normative properties and relations have in common is that they involve reasons, are grounds to seek to analyze attributive good in terms of reasons.

The question about whether there are state-given right-kind reasons against admiration, desire, or fear is a good one, as is the question of why not, if not. I don't think I have a good answer to give you, offhand, but I would start, thinking within my view, by trying to think about the rational role of those attitudes. If both the costs and benefits of admiring Jackie turn essentially on features of Jackie, that is the kind of thing that would lead us to expect, on my view, that right-kind reasons both for and against admiring Jackie would have to somehow mention or relate to Jackie, and hence explain what I think you want to explain. Intention is different because some of the costs and benefits turn on our need to have some things settled (in order to act and plan around those things) while leaving other things open.

I should still weigh in on Wlodek and Toni's third and fourth questions. Let me start with the third, which is designed to push on the sketch in the direction of a positive account of the right/wrong distinction that I give in the paper. I should start by clarifying that I don't think that the remarks that I give in the paper suffice for a full-fledged account. I don't think they are precise enough to take a stand on important possible distinctions, so I take the remarks to be more of a gesture in the direction of the kind of account that I think must be right, rather than the articulation of a particular such account.

The main reason for this is that I did, in fact, defend a particular account of the right/wrong distinction, both in "Value and the Wrong Kind of Reason" and (in a slightly earlier and less adequate version) in chapter 7 of Slaves of the Passions. I no longer think that this particular account works, but I do think that it belongs to the family of views that I still take to be correct - views that tie right-kind reasons for and against each attitude to the distinctive nature of that kind of attitude. In the course of writing "The Ubiquity of State-Given Reasons", I at one time thought that I had a replacement account that would do the job, but someone - I believe it was Pamela, at a conference in Austin - convinced me that it didn't fit the example I use in the paper of deciding where to go to grad school at a time when more current matters are pressing. So I gave up, for now, on defending an exact account, in favor of pointing to which broad sort of account I think needs to be right.

(It's worth pointing out, in passing, that although I took Pamela's account to fall under the scope of views which might have a problem with my argument in the paper, it does share, with the family of views where I think the correct account is going to be found, the insight that the right kind of reasons for each attitude are going to turn on something about the nature of that attitude.)

In any case, my original account was explicitly designed to deal with examples like Wlodek and Toni's. The idea of that account was that right-kind reasons are relative to an activity, and with respect to any activity, the right-kind reasons are the ones that are distinctive of that activity. And on my original idea, to be a distinctive reason of some activity is to be a reason that would necessarily be shared by anyone engaged in that activity. This rules out cases like Wlodek and Toni's, but is too strong. In "Value and the Wrong Kind of Reason" I weakened the condition, but weakened it too much, for reasons that will be obvious to readers of Jonathan Way's paper.

I still think that right-kind reasons are relative to an activity, and that they are the reasons that are distinctive of the activity. But I now think that looking at who shares the reasons is too coarse-grained to distinguish the right kind from the wrong. Yet the nature of the activity has to somehow tell us which reasons are of the right kind, and which are of the wrong - the right kind must somehow be more intimately connected to the distinctive nature of that activity. Or, since we are talking about attitudes, the right-kind reasons must be more intimately connected to the nature or role of each attitude. It's clear that evidential reasons for belief are more closely connected to the role of belief than Pascalian considerations are, and I believe the same goes for stakes-related reasons against belief. I also think it's clear that ordinary reasons for intention are more closely connected to the role of intention than Kavkaian reasons are, and I think the same thing goes for forthcoming information as a reason against intention. Finally, I think the reason that it is so clear that Wlodek and Toni's example involves a wrong-kind reason for intention, is that it is clear that it is not sufficiently closely related to the role of intention. Now, I can't put my finger on exactly why not, but that's where I'm stuck, right now.

Last, let me say something about Wlodek and Toni's last question, which is whether my talk about 'costs' and 'benefits' in the paper means that I can't defend a Fitting Attitudes account of value concepts like 'good', 'admirable', 'praiseworthy', and so on. The answer is that I don't think it does; perhaps 'cost' and 'benefit' weren't the best words for me to use, but I don't think that we should want to give FA-style accounts of everything that has the sort of evaluative tinge that allows us to recognize it as involving either a 'plus' or a 'minus'. After all, there are both reasons for and reasons against, and we're not going to give a FA account of reasons in terms of reasons. I know that my remarks in the final part of my paper are suggestive rather than well-spelled-out, but I would want the details to be filled in in such a way as to only appeal, when strictly construed, to such notions. Long live Fitting Attitudes!

Thanks, Mark, for some really clarifying comments. I am now off to the ASSOS conference.

Mark's cases are very interesting. As Justin has pointed out in his comment in the Way thread, we've long been concerned with some interesting cases of a different sort, which we think are WKRs but are controversial. We're both puzzling over what to make of these new Schroeder-reasons: reasons to make up one's mind (or not to), which Mark thinks bear all the earmarks of RKRs but are not object-given and, hence, do not bear on the underlying judgment of what to do or what is the case -- which one might have thought was the essence of the distinction between right and wrong kinds of reason.

But we've got some questions about Schroeder-reasons. In particular, we think it is not so clear that his reasons against intention behave differently from obvious WKRs such as demonic incentives with respect to whether one can follow them. That is, we're not yet persuaded of what Mark calls motivational asymmetry. That is only one of his earmarks, but it seems most important, since the claims that his reasons bear any of the other earmarks depend on some intuitions about rationality and flavor that we may not share, and Mark grants that the correctness earmark is controversial. So if Schroeder-reasons behave like WKRs with respect to motivational efficacy, that would be grounds for thinking that they are not RKRs for (or against) intention after all. (Instead, perhaps, they are WKRs against intending, or perhaps they are not reasons against any particular intentions about going or staying at all, but only reasons not to make up one's mind now.) We will focus on the intention cases and the reasons not to make up your mind, but we think that the same point applies to the reasons to make up your mind now, and to belief as well as intention.

Here's why they seem to behave like incentives. Schroeder-reasons seem to depend crucially on the comparative weights of the object-given reasons (with one caveat, to be discussed). To see this, let's differentiate between close calls and no-brainers. Close calls are decisions where real deliberation needs to occur in order to figure out what you have most reason to do; no-brainers are cases where as soon as you understand the scenario, you see that you have more reason to do A than B.

Consider Mark's second kind of case for not making up your mind: that there are more pressing matters (the alarm, etc.) that must be attended to immediately. In his example, which graduate program to attend is a close call. If it had been a no-brainer -- say, a decision between Old Ivy and Mediocre State -- then your mind would have been made up as soon as you took in the choice situation. Which means that despite the pressing matters you need to attend to right away, you couldn't put off making up your mind -- if only because, in an obvious sense, no (mental) act of making up your mind would be necessary or even possible. These cases behave just like classic WKRs, it seems: you can't decide not to make up your mind, precisely because it's obvious what you have most (object-given) reason to do. But if it's not a no-brainer, then you can postpone deliberation either for Schroeder-reasons or for the reasons given by incentives.

Mark's first class of cases, involving future information, are trickier. Here the previously mentioned caveat arises. Even when the balance of object-given reasons to do A or B is not a close call, if you think that you will be getting information that may be so weighty as to swing the balance in the other direction, then you can (and rationally should) postpone making up your mind. What this case has in common with close calls is that some future mental act (either of deliberation or immediate change of mind) will be required; that makes it possible to refrain from making up your mind. So the disanalogy with WKRs on motivational asymmetry does not seem to apply to close calls or swing the balance cases. Whenever deliberation about what to do is needed, it seems that one can also follow WKRs to not making up one's mind, and that it is rational to do so when the incentive is sufficiently large.

So we're left puzzling about Mark's fascinating and important sort of cases, but the issues raised are complex and we'd like to hear more from Mark about these points.

Dan (and Justin, I take it) seem to be offering different challenges to each of the kinds of example that I offer in the paper. This is helpful, because the second sort of example feels a little different from the first, to me, and I couldn't make it fit with the initial theory of the distinction that I was toying with for a while.

The challenge about my cases which involve current cognitive demands is this, as I understand it: every case is either a close call or a no-brainer. In no-brainer cases, there is no motivational asymmetry, because intention can never be put off either for reasons of cognitive demands or for demon-rewards, and in close call cases, there is no motivational asymmetry, because intention can easily be put off for both. The objection is that when I compared my cognitive demand case to a demon-reward-type case in the paper, I cheated by comparing a close call case to a no-brainer case, and so I wasn't really comparing apples to apples. This is a sharp point, and I'm not sure that I'm not guilty of having been persuaded by cases with precisely this feature.

However, I don't quite understand what the challenge for forthcoming information cases is supposed to be. Dan says, "if you think that you will be getting information that may be so weighty as to swing the balance in the other direction, then you can (and rationally should) postpone making up your mind." As a generalization, I think this is false. Sometimes, even when you expect to get information that may swing the balance, you should make up your mind anyway, because you have other decisions to make which require a settled answer to the decision in question. Then Dan says something that I don't follow about how this is analogous to 'close call' cases, and concludes: "So the disanalogy with WKRs on motivational asymmetry does not seem to apply to close calls or swing the balance cases." The accusation seems to be that cases in which further information is forthcoming - my kind of cases - are by their very nature just like close call cases, and again I did not compare apples to apples when I compared them to my no-brainer demon-reward-type case.

But I don't see why we should think this is right. Future information cases can _be_ "no-brainer" cases. So if we're considering a future information case that is also a no-brainer case, we can compare the effects of demon-reward-type incentives to those of my kinds of reasons. An asymmetry of motivation that is exhibited in the very same case can't be triaged by classification of cases. In the paper, I consider a case in which I am 80% confident that my brother will be in LA, but that 80% chance is not worth the 100% chance of traffic, and I know that he will tell me later whether he will be there. I claim that the future information is a RKR against intending now not to go into LA tomorrow. I also claim that if I were offered money to make up my mind now anyway, that would be a paradigmatic WKR. And finally, I claim that there is an asymmetry of motivation in this case, because while it is easy to wait to make up my mind on account of the future information, it is difficult to make it up on account of the reward. My evidence for this is that this case has all of the same essential features as the original toxin puzzle case - I'm making a decision that I'm highly confident that I will reverse. Moreover, all of the essential features of the case can be preserved for much higher degrees of confidence. Since I don't see how to maintain an asymmetry of motivation in the original toxin puzzle case without allowing it in cases like this, I think it's clear that there is asymmetry of motivation in these cases.

Hi Mark, we agree that the future information cases seem importantly different from the current cognitive demand cases. The former feel much more like RKRs, though our intuitions may be driven by having thought much more about R/WKRs for evaluative attitudes. So if the future information cases really do exhibit a motivational asymmetry, I think I’m sold that they are RKRs, and I think that’s neat. (I haven’t had a chance to talk to Dan since your reply, so I will only speak for me but I expect he agrees.)
One question, though. You say, “while it is easy to wait to make up my mind on account of the future information, it is difficult to make it up on account of the reward.” But that compares a Schroeder reason not to intend with an incentive to intend. What about incentives not to intend? Wouldn’t that be the apples to apples comparison? It is not so clear to me that you couldn’t postpone your decision for money in the case you consider above.

Thanks for the response, Mark. In case it's not too late, here's a short attempt to be more clear about the central case.

The central thought is that there are two possibilities: roughly, cases where (further) deliberation is warranted and cases where it isn't. These cases correspond, even more roughly, to what we called the Close Call and No-Brainer cases previously. I don't mean to put weight on the thought that this distinction is exhaustive, and I concede Mark's point (if I understand him correctly) that not all cases where deliberation is warranted are cases where it is rational to engage in that deliberation -- for instance, when further deliberation would be costly enough, you are rational to just plunk one way or the other.

But our idea is that when deliberation is warranted -- either because the object-given reasons make the decision a genuinely close call, or it just isn't evident which way the weight of those reasons lies (even if there's a fact of the matter), or if you know that future information might swing the balance (as in Mark's cases) -- then you can follow either Schroeder-reasons or classic, incentive WKRs to not making up your mind. Hence we don't yet see the motivational asymmetry.

Take a case where I'm just not sure whether the preponderance of object-given reasons favors doing A or B. Maybe if I think hard about it, I'll figure it out, but I haven't done that work yet. But now you come to me with a proposition: $1000 not to engage in that deliberation until tomorrow. Suppose too that there's no (obvious) cost to postponing my decision for a day. Then surely I can decide not to make up my mind yet in order to pocket the money, thereby following a WKR.

