I am intellectually persuaded by the arguments for consequentialism. However, like most people in that situation, by my own lights I fail to live up to the demands of that moral theory by a wide margin. And again, like most in my situation, I suspect, this is a source of disquiet but not persistent hand-wringing. But there is another moral view one might attribute to me. It is more deontological in tone. And this other moral code is connected much more directly to emotional reactions such as guilt and moralized anger. If others cheat in a business deal or steal (except in desperation) and I am close enough to the situation, I will likely have an engaged moral reaction to such a person. I will speak badly of them, refuse to hang out with them, and think poorly of them. Yet the decently well-off person who fails to contribute much money to an effective charity does not elicit such reactions in me to a similar degree. Similarly, while I myself regularly fail to be governed by consequentialist morality in my actions or in my emotional reactions to my own or others' actions, I am quite effectively governed, in both my actions and my emotions, by this other moral view. My conscience, let's call it, effectively keeps me from doing a wide range of things such as lying, cheating, stealing, hurting, and so on. In most cases I simply would not dream of doing such things, and if I did somehow do some such thing (or even fear that I did) I would likely feel really bad about it. Such governance in deed and emotion would, if I believed in commonsense (more deontological) morality, pass for tolerable moral motivation.
We are used to wondering whether a moral judgment necessarily motivates. This is the debate about judgment internalism. I don't do research on this question, but for my money those who argue against judgment internalism, such as Svavarsdottir, are winning. But they win partly by positing the coherence of an amoralist—someone who makes sincere moral judgments but sees no reason to live their life by moral standards. I, however, will just ask you to take on faith that I am not an amoralist. Since the usual way the judgment internalist debate goes is to wonder whether a person necessarily has at least some (possibly quite small) motivation to act in conformity to their moral judgments, I likely pass this test anyway. But a stronger test seems appropriate for the role that morality plays in the life of someone who is not an amoralist. It strikes me as plausible to say that, in the life of a person who is stipulated not to be an amoralist, a case could be made that the person's moral view is the one that governs, in the right way, their behavior and emotional life, rather than the moral view that they argue for in journal articles. By that standard I am not a consequentialist, and I have yet to meet many who would count as consequentialists by that standard. Let me try out the hypothesis that a person's moral view is the one that governs their actions and emotions rather than the one that governs what they say and write in intellectual circumstances. If that is right, there are very few consequentialists out there. I don't think this shows that consequentialism is defeated as a moral view, only that it is much harder to count as believing such a theory than we may have thought, and harder to genuinely persuade someone of the truth of a moral proposition than we tend to suppose. (I should mention that several years ago Steve Wall, Dan Jacobson, and I talked about these issues and, it seemed to me, were all initially drawn to the conclusion I outline here.
Some of the ideas presented here are perhaps as much theirs as mine.)
Mike,
Is Mike disposed to reason on the basis of the proposition that his wife deserves special treatment? Would he bet a large sum of money that that proposition is true? Does he often assert the proposition that his wife deserves special treatment? Etc.
Posted by: Doug Portmore | August 17, 2010 at 09:18 PM
David,
You say, "I think I am acting wrongly, but I don't get very worked up about that," and you don't think it worth "getting in other people's faces about it".
Okay. The question then is: is there any morally wrong behavior that you *do* get "worked up" about?
If not, what's the point of moralizing?
If so--if there are some things that you think "worth getting in people's faces about"--then it seems to me you really should think about what the difference is between what is worth getting worked up about and what isn't. You don't have to call this--as we non-consequentialists do--"thinking about the difference between right and wrong", but you might find it an interesting exercise nevertheless.
But maybe the problem with us non-consequentialists is that we take all this morality stuff too seriously. Sorry for getting in your face about it.
Posted by: tomkow | August 17, 2010 at 09:44 PM
Doug,
No, I don't want to stipulate that the person uses the "rival" moral theory as a decision procedure. I did not mean to suggest that your Doug could not have some such rival moral theory--I was just saying it is crucial to my case that such a rival be understood to be in place. Additionally, it is crucial that Doug be understood not to be an alegalist, as you note in your last paragraph.
So Doug in some sense cares about the law and thinks it reason-giving. And, crucially, it is not just that he fails to do the thing recommended by his avowed legal theory; he feels distinctively legalistic emotional attitudes (whatever they are!) in a way that is shaped by the rival legal norms, and his action is tolerably shaped as we would expect of a person who accepted, in the normal way, that the rival legal theory were true. If we have all that in place, do you still think it clear that Doug believes the legal theory that he says he does? Isn't it at least clear that this person is deeply conflicted in their attitudes towards the law?
Posted by: David Sobel | August 17, 2010 at 09:46 PM
Mike,
I find that case persuasive.
Posted by: David Sobel | August 17, 2010 at 09:49 PM
Tomkow,
Perhaps if you would go back and read my initial post you would see that much of what interests me here is that I do get "morally worked up" about some things, just not the things my theory tells me to get worked up about.
Posted by: David Sobel | August 17, 2010 at 09:53 PM
Isn't it at least clear that this person is deeply conflicted in their attitudes towards the law?
Yes. Doug is. He thinks that there is some reason to obey the law. And he thinks that there is some reason not to obey the law. So he's conflicted. But do you think that there's any reason to doubt that Doug believes that legal positivism is true or that the law requires that he drive at or below the posted speed limit?
Likewise, do you think that there is any reason to doubt that you believe that you're morally required to maximize the good? Or do you concede that you believe that every agent is morally required to maximize the good, but you're arguing that being a consequentialist may require more than this belief? Is that the thrust of the post? Are you denying that X is a consequentialist if and only if X believes that an act is morally permissible just when, and because, it maximizes the good?
Posted by: Doug Portmore | August 18, 2010 at 08:27 AM
Doug,
I was trying to make a go of the stronger claim. I take it you find that claim quite implausible. I don't really feel that you have isolated exactly what it is about the claim that you find so implausible. It is, I admit, a surprising and strong thesis. It would be much weaker to say one is merely not a real C even though one believes C. But I am not sure what that claim really comes to.
Perhaps you are thinking that the person, such as myself, who claims to believe C can be sincere and quite generally whatever a person sincerely claims to believe, she believes. I would have said we can think we believe something yet be wrong about that.
Anyway, if you have the energy, perhaps you could try to explain what it is exactly in the claim that you find not just surprising or controversial, but dead wrong. These analogies, I fear, are just highlighting that we disagree (or that I continue to think maintaining my view is sane).
Posted by: David Sobel | August 18, 2010 at 10:25 AM
Hi David,
Well, in general, I think that S believes that P if and only if S has the disposition to rely on P as a premise in his or her reasoning, to assert that P is true, to assent to P when asked whether P is true, to wager something on P's being true when a good betting opportunity (given one's subjective probabilities) presents itself, etc. On this criterion, I certainly believe that I'm legally required to drive at or below the posted speed limit. The fact that, despite having some motivation to obey this law, I don't obey this law and I'm not inclined to beat myself up for breaking it is neither here nor there. For the fact that I'm legally required to obey the law and have some reason to do so doesn't entail that I have most reason, all things considered, to obey the law. And, in the case of such traffic laws, I have most reason, all things considered, to break many of them. And my actions and emotions are governed (in a robust sense -- not the weak sense in which you seem to be saying that your actions are governed by a deontological theory) by my judgments about what I have most reason, all things considered, to do and to feel. They're not governed by my judgments about what the law requires.
I'm assuming that when it comes to you and the proposition 'each agent is morally required to maximize the good', the above criterion is met. So it seems to me that you believe this proposition. The fact that the rule/norm that this proposition expresses doesn't govern your actions is best explained by the fact that you don't think that you have most reason, all things considered, to obey this rule/norm -- that is, to maximize the good. Thus, the explanation seems to be the same in your case as it is in my legal case.
Posted by: Doug Portmore | August 18, 2010 at 10:55 AM
Doug, I don't think your criterion is very good, at least if I'm understanding it correctly.
Take a purely descriptive example:
As you will remember, in Philippa Foot's example she asserts the sentence while sitting on an ordinary chair. The point of the example is that she might simply be using the words "pile of hay" in some odd way. But your criterion will not sort Foot's example correctly. For instance, the fictional Philippa* would no doubt infer from what she believes a conclusion that she would express with the sentence, "Somebody in this room is sitting on a pile of hay"; if asked whether what she said is true she would say it was; and if asked whether she would like to wager that she is sitting on a pile of hay she would jump at the chance. And yet Foot's point is that she would not, in that situation, count as believing that she was sitting on a pile of hay.
That is a purely descriptive example (that is, "pile of hay" is purely descriptive). I think the situation gets even worse for your criterion when the expressions are not descriptive.
Posted by: Jamie Dreier | August 18, 2010 at 11:28 AM
Guys,
Good, this now seems to be getting towards the central issues. Doug: I was trying to imagine a person who was split. What they "intellectually" think goes one way, and what guides their actions and emotions goes another way. You point to betting behavior as something partly criterial of what the agent counts as believing. That seems sane to me in most cases, but it also seems to encroach on the admittedly vague practical vs. theoretical line. That is, if the agent merely judged that it would be good to make this bet or whatever, that would by my lights be clearly on the intellectual side of the line. But if you think it matters that they actually put their money where their mouth is, then I am tempted to say that you are, as I am suggesting, allowing the action/emotion side to be part of what determines what the agent believes.
After speaking against the usefulness of analogies here, I came up with one I like. I was thinking we should ask about the case where a person's judgments about what it makes sense to do are divorced from the action/emotion side. If the divorce really is as stark as it could be, I find the Foot/Jamie sort of point compelling: we would not think the speaker really was using "makes sense" in the way we do.
Finally, I was thinking that the traditional judgment internalist is usually thought to be offering a sincerity condition on moral claims. But I think that is a bad way to put it. The person, I think the judgment internalist should allow, might be completely sincere. She need not set off lie-detector machines (she may even be a judgment externalist and so not think that her failure to be motivated counts against her sincerity in making moral claims). Rather I think the thought is better put in terms of whether the agent really believes what she (perhaps sincerely) thinks she believes. Again, I am not siding with the judgment internalist, just pointing out that they are partners in crime in allowing the action/emotion side to play a key role in determining what the agent believes. I think of my thesis as being less strong than judgment internalism.
Posted by: David Sobel | August 18, 2010 at 12:28 PM
Hi Jamie,
I take your point. My criterion will need some work. Nevertheless, I don't think that my driving above the posted speed limit counts in any way against our thinking that I believe that I'm legally required to refrain from driving above the posted speed limit. So, unless someone can point to some disanalogy between my legal case and the case of David's not doing what would maximize the good, I'm inclined to go with my purported explanation for why David doesn't maximize the good rather than his explanation that he doesn't actually believe that he is morally required to maximize the good.
And is David assuming that moral judgments are not descriptive?
Posted by: Doug Portmore | August 18, 2010 at 12:36 PM
Sure, I think that actions, or the lack thereof, can count, in part, as determiners of whether someone has a belief. For instance, if someone is extremely thirsty and yet refuses to drink the liquid in the glass in front of them, then that's some evidence that she doesn't really believe, despite what she may say, both that the glass contains potable water and that drinking it would quench her thirst. And I think that if someone believes that they ought, all things considered, to do X, then we should, absent depression, weakness of will, and other forms of practical irrationality, expect that person to do X. What I'm questioning, then, is whether we should expect someone who believes that she is required, according to normative realm R, to do X to actually do X when she doesn't believe that she has most reason, all things considered, to do X. That is, why should we expect normative claims that a subject doesn't take to have rational authority to govern his or her actions?
Posted by: Doug Portmore | August 18, 2010 at 12:56 PM
Gosh, perhaps we have been talking past each other for a while then, and perhaps we don't disagree as starkly as it seemed. I did not mean to make the claim that you disagree with in the post of 12:56. I am trying to sort out why you attribute that view to me, but I may need some help.
I stipulated that the person we are talking about is not an amoralist. They think morality matters. That of course is compatible with the thought that other things matter too, and some perhaps more than morality. You are right that I did not want to rule that out. But the person thinks that it is normatively relevant what morality says. Given that, how should we best get at what the agent thinks morality says in a situation? I was thinking we should look at what actually gets weight in her actions and emotions rather than merely at what she says.
But perhaps I am not getting why you think I am committed to what you say above in which case I will need more help.
Posted by: David Sobel | August 18, 2010 at 01:22 PM
Well, Doug, I am assuming that Dave is assuming that moral judgments are not merely descriptive. If they are, then there is no real issue.
I thought that a widely (not universally, since this is philosophy) acknowledged difference between moral obligation, on the one hand, and legal obligation according to positivists, on the other, is that the first is normative ('robustly normative', as some people put it) and the second is not. Doesn't that seem right to you?
Posted by: Jamie Dreier | August 18, 2010 at 01:28 PM
Hi David,
I guess that I'm confused.
I thought that you were arguing as follows: If you genuinely believed that you are morally required to maximize the good, then your actions and emotions would reflect this (absent depression, weakness of will, and the like). But your actions and emotions do not reflect this belief -- that is, you don't maximize the good and you don't feel bad about failing to do so -- and you're not depressed, suffering from weakness of the will, or anything of the sort. Thus, you conclude that you don't genuinely believe that you are morally required to maximize the good.
Now, you are in fact a person who believes that you do not have most reason, all things considered, to maximize the good. So aren't you someone who believes that you are required, according to normative realm R, to do X when you don't believe that you have most reason, all things considered, to do X? Just substitute 'morality' for 'R' and 'what will maximize the good' for 'X'.
the person thinks that it is normatively relevant what morality says. Given that, how should we best get at what the agent thinks morality says in a situation. I was thinking we look at what actually gets weight in her actions and emotions rather than merely at what she says
What constitutes someone's taking what morality says as being normatively relevant? Is it that he thinks that he has some, sufficient, or decisive reason to do what morality says? If it's not thinking that there is decisive reason to do what morality says (and I assume that it's not, given your own views), then why should we expect that what morality says gets weight? It's only the moral considerations themselves that should get some weight, but, for all that, there may always be better reason, all things considered, to do something other than what morality says you morally ought to do. In which case, we certainly wouldn't expect you to do what morality says.
Posted by: Doug Portmore | August 18, 2010 at 02:21 PM
Doug,
I am confident I am not tracking the distinction between moral considerations getting weight and what morality says getting weight, but let me try to say something anyway.
I have been assuming that a factor can be treated as normatively relevant by an agent even when she does not think it normatively decisive. She might treat the consideration as getting pro tanto weight in deliberation even if it does not win, by the agent’s lights, in the fight for what it makes most sense to do. We could see this in thinking about counterfactuals or perhaps in the agent’s attitude towards doing what she concludes she has most reason to do. (And I assume all this need not be consciously available to the agent herself.) So I agree that the person I have in mind should not necessarily be expected to do what they think morality requires. Nonetheless, it could be that their deliberation is shaped in characteristic ways by what they think is morally required. I am looking at what it is that does such shaping. That is, what gets weight insofar as morality gets weight in their deliberation.
I would not have put my argument the way you do in the first paragraph above. I have tried to say that I disagree with the standard judgment internalist in that I think it is perfectly coherent for a person to be an amoralist and yet make sincere moral judgments that have no motivational tendency behind them. So I would reject the claim that all we have to know about me is that I think such and such is morally required, and then we know that, absent weakness of will and the like, I will do what I think is morally required. Indeed, I think that not only are we not entitled to that inference, but we cannot even safely infer that the agent has any motivational tendency towards that which they think is morally right. Thus I think the standard judgment internalist claim too strong. But in a way I also think the claim too weak. For in a person who is not an amoralist, we should expect thoughts about what morality requires to provide more than just some, perhaps very small, motivation in favor of doing what is thought morally correct. In the person who is not an amoralist (such as, I flatter myself, I am), I think we should look to what plays something like the characteristic moralish role in the agent's actions and emotions, rather than merely to what that agent asserts in seminars, to learn what they really think is right and wrong. And this moralish role will be a bit vague, admittedly, but it will be especially connected to episodes of real-life moralized guilt and anger, and will have some tendency to shape (perhaps not decisively) deliberation. If we find a very good fit for this part of the role of an agent's moral belief, then, I was trying out saying, perhaps that should beat out what the agent says in philosophical contexts to count as the agent's moral beliefs.
Posted by: David Sobel | August 18, 2010 at 03:06 PM
For in a person who is not an amoralist, we should expect thoughts about what morality requires to provide more than just some, perhaps very small, motivation in favor of doing what is thought morally correct.
Why should we expect thoughts about what morality requires to provide more than just some, perhaps very small, motivation in favor of doing what is thought morally correct in someone, such as yourself, who thinks that there is very often sufficient reason, all things considered, to do other than what's morally required of them?
I assume that you have some motivation to maximize the good and, thus, that you do so whenever maximizing the good isn't very costly to yourself. And I assume that you think that you have sufficient reason to refrain from maximizing the good in most of the situations in which you in fact refrain from maximizing the good -- situations in which your maximizing the good would involve significant personal costs.
It seems to me that we should expect thoughts about what morality requires to play only a small motivational role in a person who thinks that there is often sufficient reason to act contrary to what morality requires. Thus, it seems to me that your thoughts about what morality requires play precisely as significant a motivational role as we should expect from someone who believes BOTH consequentialism and that there is often sufficient reason to act contrary to what consequentialism holds that you're morally required to do.
The main idea is, then, that the extent to which S's thoughts about what she is morally (or legally, prudentially, etiquettely, etc.) required to do will motivate her to act as she thinks that she is morally required to act is a direct function of the extent to which S thinks that she has decisive reason, all things considered, to act as she thinks that she is morally required to act. In your case, I take it, you think that you rarely have decisive reason, all things considered, to act as you think that you're morally required to act. Thus, it seems reasonable to expect that your thoughts about what you're morally required to do will rarely motivate you to act as you think that you're morally required to act. Moreover, you're not unique in this way. Many act-consequentialists (including you, Peter Singer, Henry Sidgwick, Dale Dorsey, etc.) think that there is often sufficient reason to act contrary to the dictates of consequentialism. Thus, we shouldn't be surprised that consequentialists are so frequently insufficiently motivated to act as they think that they're morally required to act. They just don't think that the dictates of morality have the same rational authority that some of the rest of us do.
I'll let you have any last words. Thanks for an interesting discussion, but with classes starting tomorrow I'm going to try to refrain from participating in (although I'll continue reading) any ensuing discussion.
Posted by: Doug Portmore | August 18, 2010 at 03:44 PM
Doug,
Thanks for your thoughts and energy for keeping this discussion going. In the end I think we are just interested in different questions. It may be that I should drop the claim that a nonamoralist would necessarily have more than a little motivation to be moral, but I don't think that change would affect the overall argument much.
Posted by: David Sobel | August 18, 2010 at 04:25 PM
That was one heckuva ride...
Posted by: Dan Boisvert | August 18, 2010 at 06:35 PM
David,
I'm curious whether there might be any grounds for describing you as an "indirect" consequentialist of some sort, in practice if not in theory. I'm usually not thrilled about that term, because I think it is ambiguous between a two-level act-consequentialist view like Hare's and a rule-consequentialist view like Hooker's. In this case, though, I'm happy with the ambiguity, because I'm curious whether your practice might conform to either view. To what extent do consequentialist considerations make a difference to what your conscience keeps you from doing? If in the proverbial cool hour you reflect that your doing X will usually have bad consequences, or that bad consequences would result if people generally felt free to X, would this make it more likely that in future you would have to break through a mass of feeling in order to X, one that you would probably encounter afterwards in the form of remorse if you did?
Posted by: Dale Miller | August 19, 2010 at 01:20 PM
Well, in theory I am certainly intellectually persuaded by some variant of indirection, at least if that only commits me to the thought that the C is in the first instance offering an account of truth-makers and not decision procedures. But while I find it a very hard question to actually figure out what decision procedure I should have according to my theory, I doubt I am broadly compliant with it. However, it certainly does make a dent here and there. How I direct my charitable contributions (if not how much I contribute) is sensitive to my view, and that I am a veggie is affected by my theory of value, I think. But enough about me--in short, I doubt I can make use of that move to square my theory and practice, but hope springs eternal.
Posted by: David Sobel | August 19, 2010 at 02:01 PM
David, I consider myself a consequentialist most of the time. Especially when I reply to blogs. For me, the act of thievery has bigger negative consequences than inaction when it comes to charity. And I can give reasons for that.
Also, I don't see how consequentialism doesn't have to do with morality. It is a moral code; it has everything to do with morality. I just, as a person, don't subscribe to only one moral code.
Basically I subscribe both to consequentialism and to "I am a selfish bastard and what I want matters" in cases where my morality doesn't always make sense or has to do with the best consequences for the whole world or something.
In fact, I am kind of the opposite: my conscience might be the voice that tells me that consequentialism is right and my other morality is wrong.
Basically, the way I see it, you can believe that consequentialism is the best morality for the world, or the best morality, without its being what you want to be.
Hmm. Can one be of the opinion that consequentialism is the best morality without being a consequentialist? (Just as one can believe that vegetarianism should be followed, but not by him.) Does that make such a person a hypocrite? Or not, if he admits that he doesn't want to do it but believes that the world would be better if the philosophy were followed more?
In my case, for the sake of personal comfort, I don't really care if I am the best person for the world that I can be. I do support the idea that we should all make our morality more consequentialist. And in regards to my own morality, I am to some extent just that.
Maybe the problem here is that we are asking too much of persons. Maybe it is incredibly silly to expect an individual to commit 100% to consequentialism, or to not be influenced at all by it. Rather, what we must be looking for is more consequentialism, not people being perfect consequentialist robots.
Just my random thoughts on the matter after reading a lot of comments here.
Posted by: Elli | September 06, 2010 at 02:44 PM