
April 13, 2005

Comments


David,

I have a question about your objection. Is the following what you have in mind? Consequentialism recommends causing one person to suffer one mild, non-debilitating headache a week for the next sixty years (for a rough total of 3130 headaches) rather than causing each of 3500 people one mild, non-debilitating headache. And let’s assume that having one mild, non-debilitating headache a week for the next sixty years has no negative effects beyond the unpleasantness of the headaches themselves. If it’s this kind of thing that you find objectionable, then I’m not sure that I see the force of the objection. If instead you have something else in mind (say, causing one person the pain and anguish that comes from losing a child versus causing many people mild headaches), then I doubt that consequentialism is necessarily susceptible to the objection. Perhaps there are lower and higher pains that work similarly to Mill’s higher and lower pleasures.

David,

One more thing: perhaps you should say what the ground rules are. Can the consequentialist appeal to whatever value theory she likes in responding to objections? Also, how are we defining consequentialism here? Is it, by definition, an agent-neutral theory, as some would have it?

Boy, you try to ask questions and all you get are questions in return. Fair enough.

First off, I am a consequentialist so I am not too taken with my own question. But I worry more about the second version you mention. I take it your reply involves saying that a consequentialist could hold that it is not a better state of the world to have 1 person have terrible troubles rather than an arbitrarily large number of people have small troubles. Maybe. But suppose we were convinced that it would be better for a single person to have the big problem rather than lots and lots of paper cuts. Would you resist that thought? Or maybe the thought is that we change the nature of the harm when we bring it together into one person?

I would recommend that defenses of consequentialism appeal only to value theories that people actually find attractive, rather than ones designed merely to get the C out of a fix.

And yes, if people care what my view is on this, I would say that C is necessarily agent-neutral. But if others think otherwise, please use your own understanding of C.

Dave,
Sorry to respond with yet another question, but I can't resist. You write:

I would recommend that defenses of consequentialism appeal only to value theories that people actually find attractive, rather than ones designed merely to get the C out of a fix.

What else could make a value theory attractive but its being apt to get C out of a fix?

Dave, are you thinking of one of Larry Temkin's examples? John Broome's Weighing Lives discusses a couple of them, and he does a little work to clean them up first. Anyway, those examples do not seem to me to count against consequentialism at all. So maybe you have others in mind.
My favorite objection to consequentialism is that it cannot make sense of supererogation.

Satisficers can account for supererogation, can't they? Just say we are obligated to bring about consequences that are "good enough", and to do any better is supererogatory.

Cool idea btw, David.

I mostly object to consequentialism on the basis of what it would mean if I did. HAH!

Jamie wrote: "My favorite objection to consequentialism is that it cannot make sense of supererogation."

It's not clear to me that this objection works, even against non-satisficing versions of consequentialism.

Consider a standard formulation of utilitarianism according to which an action is morally right iff no alternative to that action would bring about a higher net utility. You might think that this version of utilitarianism can't accommodate the intuition that sometimes we do things that go "beyond the call of duty", and hence deserve extra praise. But it's not obvious that this is the case.

Suppose Joe has two choices, action A or action B. Both would bring about 100 UTES (the standard measure for utility, of course), and no alternative to either A or B would bring about more UTES. So Joe is morally required to do either A or B, and each of A and B is morally permissible.

If Joe does A, then he gets all the UTES. If Joe does B, then a bunch of deprived, sad little kiddies will get the UTES. Joe does B. He wasn't required to do B; he could have done A instead without doing anything wrong. But since Joe is less well-off than he would have been had he done A, I think we can all agree that Joe deserves some extra praise, since he did something "beyond the call of duty."

What's wrong with this sort of response?

Kris,
I believe that this is the same line that our own Campbell Brown took in his post Supererogation for Maximisers. In any case, I don’t think that Joe’s doing B, in your example, is supererogatory. I would argue that in order for P’s doing X to count as supererogatory the following three conditions must be met:
(1) P could permissibly have done some other act, Y.
(2) P is worse off for having done X instead of Y.
(3) P’s doing X is morally superior to P’s doing Y. That is, P has more moral reason to do X than to do Y.
Now, condition (3) isn’t met in your example. On the theory that you’re considering, Joe’s doing B isn’t morally superior to Joe’s doing A. Joe has no more moral reason to do B than to do A. And if condition (3) isn’t met, I don’t see how you can say that Joe has gone beyond the call of duty. To say that he has gone beyond the call of duty is to say that he has done something morally better than what duty required him to do. But, on the theory you’re considering, the performance of B isn’t morally better than the performance of A.

Campbell,

Your question seems a good one to start a new line of discussion. I recommend doing that.

But until then, consider the case of a single person. I think it makes sense to say that what is most valuable for that person, or best for her interests or well-being, can be different from what should represent that person in moral contexts. So, for example, Scanlon and lots of other people have held that some things that are good for me need not make a moral claim on others, while other things that are good for me would make such a claim (for example, especially optional goods make no moral claim on others while necessities do make such a claim). I find this intelligible. And I see the sort of intuitions that would drive one to distinguish what was good (here, for a person) from what morally matters about that person. If one combined this with a view about how one moves from "good for" to "good overall" that held that the latter is just added up from the former, then it seems we could have an intelligible story about how what was "best overall" differed from what morality required us to bring about.

Is the thought that this story is unintelligible or just that one could have a notion of "good overall" which had no other ties but to what morality recommends?

Richard: yes, I guess satisficers are consequentialists, and they can handle supererogation. I wasn't counting satisficing as consequentialist, but most people do.

Kris: I agree with Doug that a supererogatory act is supposed to be better than the (merely obligatory) alternative. It's interesting, though, that a consequentialist theory can deliver the right verdict on supererogatory cases -- my insistence that supererogation has to be better is theoretic gloss, so it's not as costly to a consequentialist to reject it.

David: I don't see how your point about 'good for' is relevant. Suppose we agree about a complete theory of what is good for a person. Now we need to know whether and in what way the good for persons contributes to what is good. Maybe it's at that point that we choose our theory of the good just to make sure we can maintain C. Anyway, that's my plan.

Jamie,

I am unsure if you are saying that one can construct a notion of "the good" that never comes apart from what morality recommends (which I would not disagree with) or that there is no coherent notion of the good that avoids the above. I was trying to sketch an instance of the latter. So of course one "could" choose a theory of the good merely in order to maintain C. The question, I am thinking, is if there is enough content to our ordinary notions of "the good" to present us with coherent alternatives to doing that. I was presenting the view that "overall good" is just determined by adding together "good for" as such a coherent alternative.

Kris,
As a good friend of mine is fond of saying, great minds think alike but dipsticks seldom differ. (I wonder if y'all know what a dipstick is.)

Dave,
I like your question. I'll have to think about it, and maybe answer in a new discussion thread.

For now, let me try to respond to Objection 1. I take it the objection is that Consequentialism has the following allegedly counterintuitive implication:

(P) There exists some number N such that it's permissible to allow one person to suffer severe pain in order to prevent N people from suffering mild headaches.

One response would be to argue, as follows, that (P) isn't as counterintuitive as it initially may seem. Consider the following sequence of states of affairs. In X1 one person suffers severe pain; in X2 one thousand people suffer slightly less severe pain; in X3 one million people suffer slightly less severe pain; and so on. If we go far enough along the sequence we'll arrive at a state of affairs in which people are suffering only mild headaches. Now, the consequentialist may argue as follows:

(1) For any n, it is permissible to allow Xn in order to prevent Xn+1.
(2) For any X,Y,Z, if it is permissible to allow X in order to prevent Y, and it is permissible to allow Y in order to prevent Z, then it is permissible to allow X in order to prevent Z.
Therefore,
(3) There exists some Xn such that (i) it is permissible to allow X1 in order to prevent Xn, and (ii) in Xn people suffer only mild headaches.

Oh, I should add that I can't claim any credit for the argument above. Broome gives a similar argument in Weighing Lives, as Jamie already noted, though Broome's argument is cast in terms of betterness (of course). And, as I recall, he borrows the argument from Temkin. Interestingly, Temkin puts the argument to a different purpose: namely, to discredit the second premise (i.e. the one about transitivity) -- one man's modus ponens is another's modus tollens.

Campbell: Sorry for posting without representing. I didn't catch your earlier post. What is a dipstick? Is that what you use to check oil in your car?

Doug, Jamie: Small question. Doug wrote, "(3) P’s doing X is morally superior to P’s doing Y. That is, P has more moral reason to do X than to do Y."

I'm not sure I understand the clause. Maybe I don't understand what a moral reason is supposed to be. I thought that the following claim was generally accepted, regardless of whether one is a consequentialist, Kantian, Rossian, egoist, etc.:

S's doing A is morally obligatory iff S has more reason to do A than any alternative to A.

(That is, S's reasons for doing A are collectively stronger than S's reasons for doing any alternative to A.)

How could S have two options, X and Y, have more moral reason to do X than Y, and yet both be permissible? It seems to me that something has to give. So I'm suspicious that you've given the correct account of what it is for an action to be supererogatory.

My thought was that the correct account would look something like this obviously rough analysis:

(1) P could permissibly have done some other act, Y.
(2) P is worse off for having done X instead of Y.
(3) P deserves praise or displays virtue or deserves to be thought of highly by others who are aware of what P has done because P did X instead of Y.

Supererogatory acts are acts that we have reason to evaluate in certain ways; they aren't acts that carry "extra moral weight", that is, stronger reasons for performing them than other permissible actions, because, if they were, those other actions wouldn't actually be permissible. Or so it seems to me.

The utilitarian can consistently adopt this account of what it is to be supererogatory. And if she does, then she can apply this account to Campbell-style cases, and get what seems to me to be the correct results.

Jamie: related question. What does 'better' mean in this context? You don't mean "have a higher utility", I guess?

(There are good reasons for a non-consequentialist not to adopt this account of supererogatory acts, which is of course only suggested by what you said:

(1) P could permissibly have done some other act, Y.
(2) P is worse off for having done X instead of Y.
(3) X has a higher utility [or brings about more intrinsic value] than Y.

Suppose I have two options, benefit myself (I get 50 UTES) or benefit a small, sad child (she gets 25 UTES). No other options are available, no other consequences of the options are relevant, blah blah. I can imagine some non-consequentialist claiming that both are permissible, but that I deserve extra praise if I give the UTES to the kid. But then I don't do something "better" in the sense of generating more utility; in fact, I do something worse in that sense. )


sh*%*t. Doug on the other posts says this as well, I think. So I'm gonna shut up for a while and read the old post. My apologies.

Dave,
I was presenting the view that "overall good" is just determined by adding together "good for" as such a coherent alternative.

That is an alternative, in the sense of being a coherent, substantive view about what overall good is. However, I'm afraid I don't see your point. That there is such a view doesn't seem to me to call into question in any way the methodology of picking a value theory so as to repair defects in C.

Suppose you say, "I think that Aristotle was correct about what makes a human life better." Suppose I agree with you. Then you say, "So that means that we ought to maximize the net sum of Aristotle-eudaimonia in the world." I reply, "No, that doesn't seem right, because it would mean we'd have to kill innocent people in order to prevent terrorists from blah blah blah." You agree with me. You add, "I guess that means C is false." But now I say, "Not at all, it just means that the good for persons is not the whole story about the general (overall) good."
Isn't this the right general description of the dialectic?

Kris,
I think it is pretty plain that in ordinary cases of supererogation the supererogatory act is better than the merely obligatory one. For example, it is common sense that an assistant professor of philosophy who earns $48,000/year is not morally obligated to give more than $10,000/year to famine relief. However, giving $15,000 this year rather than $10,000 is patently better, morally speaking, and there is no moral reason against it.
It seems to me that if we add Consequentialism to this -- I should now say maximizing Consequentialism -- we find that there can't be any examples of supererogation. But this could be wrong. Maybe I just haven't seen how to understand or represent supererogation within Consequentialism.


Jamie: related question. What does 'better' mean in this context? You don't mean "have a higher utility", I guess?


Well, that would be one substantive view about which things are better than which, but it isn't a definition. 'Better' is the comparative of 'good'. There are lots of substantive ethical theories about what makes things good. Are you asking me to endorse one, or are you asking me something else?

Jamie and Campbell,

Maybe I need to back up and try to understand the claims you guys want to make about goodness and morality.

One claim would be that there is no attractive and coherent notion of "the overall good" that could come apart from what one morally recommends.

If that were the claim, then lots of people have thought that they were using the term "the overall good" in a way that allows there to be such space. They thought the notion of "overall good" came with rules for its use (e.g., that it is nothing but the sum of individual good) such that it made sense to say that one might or might not morally recommend maximizing the good. I talk to people all the time who feel that they are using these notions such that they can come apart.

I take it you guys want to say that although these people think they have an attractive coherent notion of the good which can diverge from their moral recommendations, in fact they have no such notion.

This is why I thought it relevant to come up with a seemingly attractive and coherent notion of the good which can come apart from a person's moral recommendations.

But let me come at it another way. What would one have to show to show that your view is wrong?

Hi Jamie,

I think I may have asked a bad question. Here's (maybe) a better question. You wrote, "the supererogatory act must be better than the alternatives." I want you to tell me what you mean by "better". Telling me that it is the comparative of "good" doesn't help either! :) There are lots of kinds of goodness, for example, aesthetic goodness, extrinsic goodness, perfectionist value, signatory value, goodness for Kris, etc., and accordingly there are lots of ways in which an action can be better than another. Some people talk about the "moral value" or "deontic value" of an action, and claim that some actions enjoy more of this kind of value. Which way of being better is necessary for an action to be supererogatory?

That's the question I wanted to ask.

One way of answering it would be to say that you are focusing on Moorean intrinsic value, and that in order for an act to be supererogatory, it has to generate more Moorean intrinsic value than its alternatives. That answer seems to me problematic for the same reason that "has a higher utility" seemed to me problematic.

In the example you gave in the previous post, one obvious sense in which giving $15,000 is better is that it generates more intrinsic value than giving $10,000. But in general (it seems to me) we non-consequentialists don't think supererogatory acts have to be better in this sense (Doug's point, if I understand him correctly, from the old post, which I endorsed without realizing Doug beat me to it a few posts ago), so it seems odd to use something that most non-consequentialists don't believe in, in an argument against consequentialism.

So I assumed that there was some other kind of value you were focusing on, and so I'm asking you to tell me what it is.


Kris,
Oh, I see. I meant 'better' in the ethical sense, not the aesthetic sense or any of the others you mentioned. Certainly not intrinsic value, but maybe whatever Moore meant by that minus the 'intrinsic'? But I have a very hard time figuring out what Moore did mean.
In the example, I thought it was obviously intuitively better (ethically better) to give the $15,000. That is much more obvious, as far as I'm concerned, than any particular axiology, so I think it's a good idea to rely on it when you're trying to figure out what axiology to accept.

Dave,


They thought the notion of "overall good" came with rules for its use (e.g., that it is nothing but the sum of individual good) such that it made sense to say that one might or might not morally recommend maximizing the good. I talk to people all the time who feel that they are using these notions such that they can come apart.

Just by the way, I doubt that the ordinary concept of "individual good" has enough content to make sense of the idea of summing individual goods. We have to give it more content for our purposes. But suppose your friends have managed to do that.
So now your friends, having identified the overall good as they understand it, ask themselves whether it is morally right to maximize it. Right? Suppose they think it isn't. For example, suppose one of them suggests that instead of maximizing it, we should prefer the state in which it is most equally distributed. Doesn't this mean that the egalitarian friend thinks the equal distribution of individual good is itself good? And isn't he saying that what he'd identified as the overall good isn't really good after all -- equality in its distribution is what's really good? It would be very puzzling if he continued to insist that the realization of the overall good doesn't matter but only its distribution. That's because our contact with the overall good comes via its connection with choice.

Hi, sorry for barging in, but the textbox on the bottom just seemed too inviting :)

I am a... kind of utilitarian myself, and the main problem I see with utilitarianism is death.

How would killing all the miserable people in the world rate in UTES?

Jamie,

Well suppose (forget about what is actual) that the majority of people enmeshed in these sorts of debates in fact distinguished what they thought created the most good and what was morally best. Suppose they thought it mattered how the same amount of good (or bad) was brought about, whether it was caused or allowed, intended or foreseen. And suppose they said loudly that the right was not determined by the good and that what was most good need not be what was most right and suppose they said this was what was distinctive of their view.

In that case the problem would not be that what such people said was puzzling for it would be the common way of talking. And such people would be asserting that they had other and stronger connections to the notion of overall good besides its connection to choice.

Now in such a world would you say that your view was just wrong? Or is your view that even in such a world there is nothing for them to mean by "overall good" except for its connection with choice?

One other way of putting it. I suppose at least that the above way of talking was the common way of talking 20 or 30 years ago. Do you think that despite being the common way of talking, there was just no clear alternative idea connected with "overall good" aside from its connection with choice even back then?

Or is the thought that consequentialists, as it were, lost their nerve, and started allowing lots of considerations to start counting under the heading of "goodness" that once were thought to be disallowed under that heading (distribution considerations, eliminating "nasty preferences," and such). And that once this was done, the only remaining connection with goodness is "what is most worth choosing". On this story, those who insist on a distinction between goodness and "what is most worth choosing" simply have not kept up with the changing times.

Dave,
As to the first question:
Well, look, in some possible world, people mean 'avuncular' by 'good'. This modal fact does not seem to me to count against my view.
As to the second question:
I think there was some pretty serious confusion starting in the early seventies, myself. Partly Rawls's fault, maybe partly due to Bernard Williams. Philippa Foot ("Utilitarianism and the Virtues") tried to straighten everyone out. Campbell and I are helping her.

As a generalisation of Christer's question, and more directly inspired by John Broome's forthcoming article in the Journal of Political Philosophy: how should we value population? How do we evaluate births, deaths and policies that affect the rates of these, from a consequentialist perspective?

My own intuitions for consequentialism tend to presuppose the existence of persons: I want to maximise some (appropriately defined and aggregated) measure of welfare precisely *because* there exist people with an interest in it. I'm not sure how that allows me to make judgements about whether these people should exist, without:

(a) a new metaphysical conception of the person; or
(b) a non-consequentialist theory.

On the supererogation issue, I wonder whether consequentialism even needs a notion of obligation in the sense that seems to be being sought. Why can't it simply say:

(1) some things are (morally) better than others;

(2) some things rank low (high) enough on this spectrum that we inflict social sanctions if people do (don't do) them;

(3) some things rank high (low) enough that we give people social rewards if they do (don't do) them; and

(4) we draw these lines on the basis of where (from our consequentialist perspective) they would be most usefully drawn, taking into account both their direct effects and effects on incentives.

On this account, the line between "morally required" and merely "permitted" isn't actually a moral one at all: it just needs to appear to be one in order to work.

Kris,

I take a moral reason to be a consideration that counts morally in favor of performing the act. So, for instance, on utilitarianism the fact that doing X will increase someone’s utility counts as a (not necessarily decisive) moral reason for doing X, but the fact that doing X will make someone, say, more knowledgeable does not in itself count as a moral reason to do X. Note also that on utilitarianism the fact that doing B will distribute a given amount of utility in one way rather than another (amongst a number of others rather than amongst only you) does NOT count as a moral reason in favor of doing B. Of course, on commonsense morality, things are quite different. On commonsense morality, there is a self-other asymmetry, and so the fact that doing B will distribute a given amount of utility to others as opposed to yourself does count as a moral reason for doing B.

Now here’s what I think is going on when you and Campbell claim that utilitarianism can accommodate supererogatory acts. You cite examples where (1) an agent may permissibly do either A or B, (2) the agent would be better off if she does A, and (3) others would be better off if she does B. You then claim that doing B is supererogatory because doing what’s better for others as opposed to yourself is, we think, something that it’s morally good to do and hence morally praiseworthy. But distributing a given amount of utility to others rather than yourself isn’t something it’s morally good to do if utilitarianism is right. And note that appealing to moral praiseworthiness doesn’t help, because it’s not morally praiseworthy if it’s not morally good to do. If there’s no moral reason/consideration in favor of doing B over A, why should we consider it morally praiseworthy to do B rather than A?

So in order to show that utilitarianism can accommodate supererogatory acts, one must show that conditions (1)-(3) below can all be met on utilitarianism:

X is supererogatory iff…
(1) P can permissibly do either X or Y.
(2) P’s doing X as opposed to Y is in some way more costly, demanding, or challenging for P.
(3) P’s doing X is morally superior to P’s doing Y. That is, there is some moral reason/consideration that speaks in favor of P’s doing X rather than Y.

As I see it, you’ve shown that utilitarianism can meet conditions (1) and (2) but not (3). And I think that you’ll never show that utilitarianism can meet all of (1)-(3), because, in any case where, on utilitarianism, condition (1) is met, condition (3) will not be met.

Kris,

One more thing: You say, "Supererogatory acts...aren't acts that carry 'extra moral weight', that is, stronger reasons for performing them than other permissible actions, because, if they were, those other actions wouldn't actually be permissible. Or so it seems to me."

I think that that's exactly what they are. Take Jamie's example of giving to famine relief. The fact that giving more would save more lives does carry extra moral weight. Now you want to claim that if there is, on balance, more moral reason to give $15,000 as opposed to $10,000, then you would be required to give $15,000. Well, this is what the consequentialist must say, but there lies the problem for consequentialism in accommodating supererogation. The consequentialist has no way to accommodate the idea that it could be morally permissible to do either of two acts where there is more moral reason to perform one rather than the other.

Doug,
Suppose that the utilities in X and Y are as follows (there's only two people):

X: me 10, you 10
Y: me 20, you 0

And suppose I choose X. Then it seems that, on utilitarianism, your conditions (1) - (3) are all met. Perhaps you'll say that (3) is not met. But, by your own admission, utilitarianism implies that "the fact that doing X will increase someone’s utility counts as a (not necessarily decisive) moral reason for doing X." It follows that, according to utilitarianism, there is some moral reason that speaks in favour of my choosing X rather than Y, because my choosing X will increase your utility. Am I missing something?

Campbell,

I apologize for being sloppy. What I meant to say, and what I did say earlier, was that P has to have more moral reason to do X than to do Y. The fact that there is some moral reason in favor of doing X as opposed to Y is, as you point out, insufficient for making P's doing X morally superior to P's doing Y.

So, condition (3) should read: "P’s doing X is morally superior to P’s doing Y. That is, P has more moral reason to do X than to do Y."

So do you see any way for utilitarianism to meet conditions (1)-(3) now? And do you accept this definition of supererogation?

Doug,
Thanks for the clarification. To answer your questions: I agree that utilitarianism is inconsistent with the conjunction of (1), (2), and (3); but I'm reluctant to accept your definition of supererogation. It seems to me that your definition makes supererogation impossible, because (1) is inconsistent with (3). Though I think that just takes us back to old disagreements.

Let me ask you a different question: what's wrong with your original (sloppy) definition of supererogation? Why isn't it sufficient that there's some moral reason in favour of X?

I want my def. of supererogation to contain the idea that it involves doing more than one is required to do. I think of that as the main thing about the concept.

And so I like the set up where we contrast X with Y and say both are morally permissible (for an act is not supererogatory if it is the only permissible act because then it is not a case of doing more than one is obligated to do) and the super act is, in some sense, more morally recommended than Y.

I suppose we could have a notion of supererogation in other areas as well--prudential super, etc. but if we are talking about moral super, then it seems the act must be better on that dimension.

I hesitate a bit, or just need to think more about, the requirement that the super must be somewhat costly. Suppose I could press button A or B and that both are acceptable and it makes no difference to me, but B makes the world somewhat better than A. Then pressing B seems plausibly to be super to me.

Doug wrote (sorry about the long quote): "Kris, You say, 'Supererogatory acts...aren't acts that carry "extra moral weight", that is, stronger reasons for performing them than other permissible actions, because, if they were, those other actions wouldn't actually be permissible. Or so it seems to me.'

I think that that's exactly what they are. Take Jamie's example of giving to famine relief. The fact that giving more would save more lives does carry extra moral weight. Now you want to claim that if there is, on balance, more moral reason to give $15,000 as opposed to $10,000, then you would be required to give $15,000. Well, this is what the consequentialist must say, but there lies the problem for consequentialism in accommodating supererogation. The consequentialist has no way to accommodate the idea that it could be morally permissible to do either of two acts where there is more moral reason to perform one rather than the other."

If you are right about this, then it seems to me that the consequentialist is not the only person who has a problem making sense of the superogatory. Any person who accepts the following claim has this problem:

(RO): S's doing A is morally obligatory iff S's reasons for doing A are stronger than S's reasons for doing any alternative to A.

According to RO, you ought to do what you have the most reason to do.

You don't have to be a consequentialist to accept RO. For example, I think Ross accepted RO, and he wasn't a consequentialist. I think I accept RO, but since I think that there are many other sources of reason for action besides facts about the intrinsic value generated by the actions, I don't like consequentialism.

I would have also thought that RO is antecedently much more plausible than consequentialism. (Consequentialism seems to imply RO, but the converse isn't true.) When I squint at RO long enough, it even sometimes looks analytic! (Maybe I need to get my eyes checked.) But doesn't RO face the same troubles accounting for the supererogatory if you are right? If so, that's an interesting result: there's a problem for non-consequentialists as well.

Kris,

You say, “Any person who accepts the following claim has this problem [the problem of accounting for supererogation as I’ve defined it]:

(RO): S's doing A is morally obligatory iff S's reasons for doing A are stronger than S's reasons for doing any alternative to A.”

I don’t see this at all. Take Jamie’s famine relief example again. Perhaps, given my modest salary, giving $15,000 to famine relief would mean that I couldn’t purchase the books and journals that are crucial to my professional development. And suppose that I have more reason, all things considered, to provide for my professional development than I do to contribute to famine relief. This is consistent with my having more moral reason to contribute to famine relief. Now, given your RO, I wouldn’t be morally required to give $15,000 to famine relief, since I have more reason, all things considered, to provide for my professional development than I do to contribute to famine relief. Nevertheless, I can account for the fact that giving $15,000 to famine relief is morally permissible, for surely the following is also true.

(MO): S's doing A is morally permissible if S has more moral reason to do A than some other morally permissible alternative.

Thus, giving $15,000 will be supererogatory on my definition. And I have not only shown that this is compatible with your RO; indeed, I have shown that appealing to your RO helps me in accommodating supererogation.

Campbell: Would you accept that an act must be morally superior to some permissible alternative in order to be supererogatory? It seems that you must accept this even if you don’t accept the way I’ve tried to cash out the idea of moral superiority. In any case, it seems clear that utilitarianism can never hold that it is morally permissible to perform an act that is morally inferior to some other permissible alternative. My main point, then, is that it’s not enough for you or Kris to say that utilitarianism allows for the possibility that an act that produces more utility for others can be a permissible alternative to some other permissible act. You have to show in addition that, on utilitarianism, an act that produces more utility for others is in some sense morally superior to one that produces just as much utility overall but less utility for others. But it seems to me that utilitarianism is committed to the view that in no sense is an act that produces just as much overall utility as another ever morally superior to that other.

David: I guess that I should probably think about condition (2) more as well. In any case, I’m not insistent on it, but I am insistent on conditions (1) and (3).

Kris,

I'm sorry, but I misread your "iff" as "if." Okay, I see that anyone who accepts RO has the problem. But I don't find RO as obvious as you do, for I believe that it's at least conceptually possible that non-moral reasons can sometimes override moral reasons, in which case RO loses all of its plausibility.

Doug,

Fair enough. If we allow for the possibility that non-moral reasons can sometimes override moral ones, RO does seem less plausible. (However, I'm not sure I want to follow you here. Maybe more on why later.) But there are two theses in the neighborhood that still seem plausible even granting this possibility. And both these theses face the same sort of worry we've been talking about.

MRO: S's doing A is morally obligatory if and only if S's moral reasons for doing A are stronger than S's moral reasons for doing any alternative to A.

I think it's clear at this point that MRO faces the worry we've been discussing just as much as consequentialism does; it's more plausible than consequentialism; and MRO isn't problematized by the possibility you mentioned in the way that RO is.

If we are willing to allow that there are moral reasons and non-moral reasons, and that sometimes one kind of reason can trump the other, then it seems to me that we should also be willing to allow that there is a kind of oughtness -- call it "all things considered" oughtness (sometimes also called "just plain ought[ness]") -- that some actions enjoy: some actions we just plain ought to do. Accordingly, we can entertain the following theory:

ARO: S's doing A is "all things considered" obligatory if and only if S's reasons for doing A are stronger than S's reasons for doing any alternative to A.

ARO also seems plausible; when I squint at ARO, it looks analytic; and it still looks like this even when I open my eyes all the way!

Now let's consider a suggestion by David, "I suppose we could have a notion of supererogation in other areas as well--prudential super, etc."

I think this is right; if we have a notion of duty in other areas, we should also have the notion of "going beyond that duty" as well. So if we have a notion of an all-things-considered just-plain-ought, we also have the notion of an all-things-considered supererogatory action.

And ARO, despite its plausibility, will have trouble accounting for this as well -- provided that we assume (as I don't think we should) that supererogatory acts are ones that we have more reason to do than their alternatives.

Kris,

I think that MRO is problematized by the possibility that non-moral reasons can override moral reasons, because it seems to me that both of the following are true:

(RO*): S's doing A is morally obligatory only if S has most reason, all things considered, to do A -- where we weigh both moral and non-moral reasons to determine what S has most reason, all things considered, to do. (It would be very odd if fulfilling one's moral obligations might turn out to be contrary to what one has most reason to do, all things considered.)

(MO): S's doing A is morally permissible if S has more moral reason to do A than some other morally permissible alternative. (It would be very odd if doing what one had more moral reason to do over what one had less moral reason to do might turn out to be morally impermissible.)

Now, if these two, RO* and MO, are true, and if non-moral reasons can override moral reasons, then the following must be false.

(MRO*): S's doing A is morally obligatory if S has most moral reason to do A.

And so the biconditional expressed by MRO would be false. So it seems that MRO is problematized by the possibility that non-moral reasons can override moral reasons. And whether this is indeed a possibility is probably where we disagree.

I am inclined to think the initial question -- "favorite objection to consequentialism" -- is a poor one.

Sorry David ;)

I take consequentialism to refer to a type of ethical theory (more like a genus, really), namely theories that hold that what is important in assessing the morality of actions is their consequences. (And only their consequences? I'm not sure about this.)

Consequentialist theories are then made up of three bits:
a theory of scope: when to count that which has value;
a theory of value: what has value;
a theory of aggregation: how you count it.

So, for example, the distinction between classical utilitarianism and ethical egoism is simply a difference in their theories of scope: the utilitarian (plausibly, in my opinion) insists that anything which generates the thing of value must be counted, while the ethical egoist insists that only the individual making the decision should be counted.

Satisficing is a theory of aggregation, so the distinction between a conventional maximising utilitarian and a satisficing utilitarian comes down simply to their theories of aggregation.

I suspect that some of the confusion above was generated by sloppy usage of the terms "utilitarianism" and "consequentialism"; instead, we ought to say, for example, "maximising hedonistic utilitarianism" when we mean that, so it is clear that the criticism is aimed at that theory.

I am not sure how or indeed if you can go about criticising consequentialism in general. (How do you criticise a type of theory?)

David, to answer your question, I would say that one criticises a type of theory by criticising whatever it is that every token of that type has in common. (There must be something they all have in common; otherwise there would be no type of which they all were tokens.) In the case of Consequentialist theories, which as you put it "hold what is important to assess the morality of actions is their consequences", one may criticise all theories of this type by showing that that's not what is important. Perhaps that's a tall order; I'm inclined to think it is. But I don't think any difficulty arises merely from the fact that "Consequentialism" names a type. Surely, we criticise types of things all the time.

It is perhaps an interesting question how possible it is to criticize C as a class rather than merely this or that version. I suppose this has to do with how much common content we think gets packed into the meaning of C and how much is allowed to vary between instances of C. This, I suppose, partly explains the vast literature on what is constitutive of C and what is not.

It should be noted that some of the most influential writers here aim to be arguing for or against C itself. Suppose, for example, that there were a problem with agent-centered restrictions. If that were so, it would force the correct ethical theory to be, in important ways, close to a C-type view.

Surely, I would say, even if C is too unwieldy to argue against all at once, it is possible to argue for or against views that are closer or further away from the core of the view.


Thank you Campbell, in particular for making the type token distinction, I knew there was a better way to put this, and my students have been struggling with the distinction in terms of genus/species.

And apologies David Sobel, you are of course right that C can be argued against.

I suspect my skepticism about a grand objection to C is a reflection of a general skepticism about grand theorising, I'm inclined to think if you want to object to C then you are best to chip away at it token by token...

On that note here are several common objections to varieties of C:

Demandingness objection:
That C is too demanding. I think there are three variants of this objection ranging from least interesting to most:
1. C is quite hard work
2. C requires me to do things which I think I am obliged not to do (i.e., the family-obligations objection)
3. Poverty of the moral universe (i.e., on some varieties of C, every decision, whether to play the banjo or the flute, turns out to be a moral decision)

Responses?

David Hunter: I don't understand objection (3). Could you elaborate a bit more?

Doug: you are right that we disagree about whether moral reasons can be over-ridden in this way. I want to think about this some more.

Returning to the issue of whether supererogatory actions have to be "better" in some ethically relevant sense, I ask you to consider the following case.

Joe and I are equally deserving of some UTES. We both loves us some UTES. In fact, we are pretty much indiscernible with respect to any morally relevant feature. I have a UTES distributor: I can either give Joe the UTES or I can give them to myself. I haven't promised to give Joe any UTES or promised that I wouldn't. Joe doesn't even know that I have this choice. There's no one else to give them to. Those are my two choices. I give Joe the UTES.

What a nice, unselfish thing for me to do. I seem to deserve some praise because I gave them to Joe. I didn't have to give them to Joe; he deserved them no more than I did, and the total amount of UTES produced is the same either way. So both actions are permissible, and neither seems morally better than the other. Still, giving the UTES to Joe seems to merit a kind of praise, which is why I want to say it is supererogatory.

So it's not the case that all supererogatory actions have to be "ethically better" than their alternatives.

Kris,

I agree that you deserve moral praise for giving the UTES to Joe instead of yourself, but that’s because I'm not a utilitarian. And, in order to show that there can be supererogation on utilitarianism, you need to show that the utilitarian should think that your act of giving the UTES to Joe instead of yourself is praiseworthy. But, on utilitarianism, a disposition to prefer distributing UTES to others rather than yourself when the total utility will be the same either way isn't a disposition that increases your likelihood of maximizing utility. So, in what sense, is it praiseworthy on utilitarian grounds? In order to show that utilitarianism can accommodate supererogatory acts, you must show:

Not only…

(1) that, according to utilitarianism, S’s doing X and S’s doing Y are both morally permissible.

But also…

(2) that, according to utilitarianism, S’s doing X is morally better than S’s doing Y.

As far as I can tell, you haven’t demonstrated (2), but rather you’ve demonstrated only that, according to our commonsense moral intuitions, your distributing the UTES to Joe is better than your distributing the UTES to yourself.

Kris,

Exchange "more morally praiseworthy" for "morally better" in (2) above. I don't see that it makes any difference.

Kris,

One more thing: I do think that giving the UTES to Joe is ethically better than giving the UTES to yourself; it's just not ethically better if utilitarianism is true. I agree with commonsense morality, here, in accepting what Slote calls the self-other asymmetry. As Slote puts it, "our ordinary thinking about morality assigns no positive value to the well-being or happiness of the moral agent of the sort it clearly assigns to the well-being or happiness of everyone other than the agent.”

Maybe I'm beating a dead horse. (Kris: when did you stop beating your dead horse? Dumb joke. Anyways...)

Doug wrote, "I agree that you deserve moral praise for giving the UTES to Joe instead of yourself, but that’s because I'm not a utilitarian. And, in order to show that there can be supererogation on utilitarianism, you need to show that the utilitarian should think that your act of giving the UTES to Joe instead of yourself is praiseworthy. But, on utilitarianism, a disposition to prefer distributing UTES to others rather than yourself when the total utility will be the same either way isn't a disposition that increases your likelihood of maximizing utility. So, in what sense, is it praiseworthy on utilitarian grounds?"

Here's something that seems to me important and worth emphasizing. Utilitarianism, as standardly formulated, is a theory about what makes right actions right, wrong actions wrong, and obligatory actions obligatory. It's not a theory about what makes it appropriate for agents to have certain evaluative attitudes. Utilitarianism as standardly formulated looks like this:

(UTE): An action is wrong iff some alternative to that action generates more utility.

This theory doesn't say anything about what evaluative attitudes we should have. I take that to be obvious.

To say that something is praiseworthy is to say that it deserves a positive evaluation; it is to say that certain reactive attitudes towards the thing in question are justified or appropriate.

So we might naturally wonder when our evaluative attitudes are appropriate or justified. We might even want to formulate theories that answer the question "when is an attitude towards something appropriate or justified?" But note again that UTE is absolutely silent on this question. It doesn't have a clause that says that an attitude towards an action is appropriate just in case .....

Now one *could* *supplement* UTE with another theory:

(UTE-EVAL): An action is praiseworthy to the extent that it promotes utility; the more utility an action promotes, the more deserving of positive evaluation it is.

UTE-EVAL and UTE do form a nice package, and it may be that you can argue from UTE to UTE-EVAL, but you would obviously need additional premises. The two views are logically independent of each other.

The moral I draw is that the utilitarian's account of right and wrong actions and the utilitarian's account of when actions deserve positive evaluation can come apart; it may be that not all permissible actions are equally praiseworthy -- and this possibility is consistent with utilitarianism. So even if we insist that there must be some sense in which a supererogatory action is morally better (and Doug says, "Exchange 'more morally praiseworthy' for 'morally better' in (2) above. I don't see that it makes any difference."), it's not obvious that the utilitarian can't account for this difference.

Now, just for the record, I'm not a utilitarian or a consequentialist. I guess I'm some sort of messy pluralist about duties, values, and reasons, all of which can conflict, override, overwhelm, and perplex. But I'm not convinced that the supererogation argument against consequentialism works.

Kris,

Fair enough. I agree with most everything that you've said in your most recent comment. So, in order to account for supererogation, UTE would need to be supplemented with some sort of account of when an act is morally praiseworthy, specifically, an account that would allow one act to be more morally praiseworthy than another despite the two being equal in their utility production. And what I doubt is that a utilitarian would be happy with any such account. Furthermore, I doubt that any such account would be consistent with the spirit of utilitarianism. But, in any case, I concede, then, that I haven't shown that utilitarianism cannot accommodate supererogatory acts. To do so, I would need to go back to and defend my original definition of supererogation, which holds that there has to be more moral reason to do X than to do Y, and then further argue that utilitarianism is committed to an account of moral reasons that doesn't allow for there to be more moral reason to do X than to do Y when X and Y are equal in their utility production. Perhaps I should think about this more and then post something on this topic later.

Hi Doug,

" And what I doubt is that a utilitarian would be happy with any such account. Furthermore, I doubt that any such account would be consistent with the spirit of utilitarianism."

Here's an account that's worth considering (if you are a utilitarian!):

An action is praiseworthy iff the action is caused by a character trait C, the possession of which tends to lead to utility-maximizing actions.

(This is very rough, but it is in the "spirit of utilitarianism".)

Arguably, if one has a long-standing preference for helping others as opposed to one's self, one will be more likely to maximize utility.

Let's stipulate that in the case just described, my action was caused by the relevant utility-producing character trait. It then counts as praiseworthy, even though it does not produce more utility than its alternative.

Obviously, very rough. But if something like this could be developed, then the utilitarian could avoid the objection from supererogation without violating the "spirit" of utilitarianism as well. Just a thought.

Doug and Kris,

I am enjoying this exchange. I was thinking of a possible response to Doug when he writes,

"UTE would need to be supplemented with some sort of account of when an act is morally praiseworthy, specifically, an account that would allow one act to be more morally praiseworthy than another despite the two being equal in their utility production. And what I doubt is that a utilitarian would be happy with any such account. Furthermore, I doubt that any such account would be consistent with the spirit of utilitarianism."

I wonder if it is available to the C to say something like the following. What traits we should praise depends not only on how well the praised action does, compared to rivals, in creating utility, but also on the effect of praising such acts on others. Generally, from a C point of view, a main vice agents have is excessive concern for their own well-being. Praising people in a way that reduces this trait might create positive effects. It might do so even if the praised act is defined as one that creates no more utility than the less praised act. Generally, a C could say we should use blunt tools to hammer out the chief flaws in human nature. Cs can take Aristotle's advice and guard against the nearer vice.

Kris and David,

To be truly in the spirit of maximizing utilitarianism we should say,

An action is praiseworthy iff the action is caused by a character trait C, the possession of which has a greater tendency than any other trait to produce utility (N.B., this could be through our own actions or through various influences on others).

Now compare C1 and C2:

C1 is the disposition to help others as opposed to one's self, always.

C2 is the disposition to help others as opposed to one's self, except when doing so is unlikely to increase total utility.

It seems that C2, as opposed to C1, has a greater tendency to produce utility. In which case, your act of giving Joe the UTES rather than yourself, an act stemming from C1, is not morally praiseworthy.
