Satisficing consequentialism (SC) is the view that an act is morally right iff its consequences are “good enough.” Michael Slote offered several well-known examples in support of SC. One involves a fairy-tale hero who, when rewarded by the gods with whatever he asks for, asks only that he and his family be comfortably well-off. Another involves a motel owner who gives some stranded motorists the first available room rather than the best available room. I think SC is untenable: it permits agents to bring about a submaximal outcome in order to prevent a better outcome from obtaining.
There are lots of versions of satisficing consequentialism. If you’re interested, look here. (If anyone knows of any other versions, let me know!) I’ll consider just one, the version Tom Hurka calls “absolute level” satisficing, according to which there is a level of utility such that every act must reach that level if it is to be right; any act that reaches that level is right. Of course, there might be situations in which that level cannot be reached by any available alternative. In those situations, Hurka suggests we should maximize utility. Here, then, is absolute level satisficing consequentialism:
ALSC: There is a number, n (n > 0), such that an act is morally right iff either (i) it has a utility of at least n, or (ii) it maximizes utility.
ALSC goes wrong with respect to acts that not only fail to cause goodness, but actually prevent goodness. Let n be +20. Suppose that by doing nothing, I could bring about consequences with intrinsic value of +100. Alternatively, I could prevent that outcome by bringing about a different outcome, with intrinsic value of +20. ALSC entails that both acts are morally permissible. But the act of preventing the better outcome is clearly wrong. In general, it is not permissible to prevent a better outcome by bringing about a worse one. (Tim Mulgan gives examples along these lines in his recent book.) Many other versions of SC are susceptible to this sort of counterexample.
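Purely as an illustration, here is a minimal sketch in Python of the ALSC test and the verdict it delivers in this case. The function name and the encoding of the options are mine, not part of any official formulation; the utilities are just the +100 and +20 from the example above.

```python
# A minimal sketch of ALSC's rightness test, under the assumptions stated above.
# ALSC: an act is right iff its utility is at least n, or it maximizes utility.

def alsc_right(utility, alternatives, n=20):
    """Return True iff the act is right by ALSC.

    utility      -- utility of the act being evaluated
    alternatives -- utilities of all available acts (including this one)
    n            -- the absolute satisficing threshold (here, +20)
    """
    return utility >= n or utility == max(alternatives)

options = {"do nothing": 100, "prevent the better outcome": 20}

for act, utility in options.items():
    print(act, "->", alsc_right(utility, options.values()))
# Both lines print True: ALSC counts preventing the better outcome as
# permissible alongside doing nothing, which is the verdict under attack.
```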
The sorts of examples Slote takes to provide support for satisficing consequentialism are examples in which someone fails to bring about a better outcome because they bring about an outcome that is “good enough.” In order for these examples to be persuasive, it is essential that they involve omissions, or allowings. It is permissible, we might think, to forego something better, to allow a better opportunity to pass – to fail to ask for a better reward, or to decline to look for a better room for the stranded motorists. It seems much less plausible to say that it is permissible to prevent something better from happening – to intercept someone’s reward, or to move the motorists out of the best room and into the merely satisfactory one, for no reason. Steering the world away from a better result towards a less good result should be unacceptable to a consequentialist. But satisficing consequentialism allows agents to steer the world towards a worse result in some circumstances.
Of course, we might notice one way in which preventing goodness seems different from allowing goodness to pass: characterizing an act as a prevention makes it seem like a doing, not an allowing. But this distinction is not helpful here. First, consequentialists do not accept a morally relevant distinction between doings and allowings. So for the consequentialist, preventing goodness is on a par with allowing goodness to pass.
Second, even if the consequentialist were to incorporate a distinction between doings and allowings, it would not help. Suppose we were to say that an act of allowing something to happen does not get credit or debit for the value of what is allowed; one must “actively” bring about an outcome in order to get credit for it. Consider two situations. In situation one, I can either (a1) allow a greater good to pass by, thereby allowing a lesser good to exist instead, or (a2) cause the greater good. In situation two, I can either (a3) prevent the greater good by causing the lesser good, or (a4) allow the greater good to come to pass. I take it that a3 is wrong. But if a3 is wrong, a1 must be wrong as well. For in doing a1, not only do I fail to bring about the greater good, as I do in doing a3; I also fail to “actively” cause anything good to happen. At least a3 gets credit for bringing about the lesser good; a1 does not get credit even for that. But satisficers of all the varieties discussed so far must believe that doing a1 in situation one is permissible.
There is a sort of satisficing consequentialism that you might think remains untouched by this sort of example. The idea behind this sort of satisficing view is that it is permissible for an agent to forego the best outcome when bringing about the best outcome would involve significant personal sacrifice. This way of thinking about satisficing is endorsed by John Turri, who says (in a forthcoming article) that “an outcome O is good enough only if O is at least as good as the best outcome the agent could have produced in the circumstances without sacrificing something of appreciable personal importance to her.” Call this “self-sacrifice satisficing” (SSS).
Of course, since this is only a necessary condition, it does not provide us with an alternative to maximizing consequentialism. Just for fun, let us consider a version of SSS in the spirit of Turri’s proposal that provides both necessary and sufficient conditions for moral rightness:
SSS: An act, a, performed by person S, is morally right iff the utility of a is at least as great as the utility of any alternative to a whose utility for S is “good enough.”
As we’ve seen, there are many ways to resolve the vagueness of “good enough,” so there will be many versions of self-sacrifice satisficing.
SSS is not what Slote had in mind by satisficing consequentialism. No personal sacrifice is involved in asking the gods for more good stuff. But in any case, SSS is subject to counterexamples. Suppose that in a certain situation, some great benefit will accrue to others unless S prevents it by bringing about a lesser good for himself or others. Suppose these are the utilities of S’s alternatives:
Alternative    Utility for S    Utility for others    Total utility
A1             0                1000                  1000
A2             0                50                    50
A3             50               0                     50
Suppose that neither A1 nor A2 is good enough for S, but A3 is. Then A3 is permissible according to SSS. Moreover, since A3 is the only alternative whose utility for S is good enough, and A2's total utility is as great as A3's, SSS entails that A2 is also permissible. But A2 cannot possibly be permissible: A1 involves no greater sacrifice than A2, yet has far better consequences.
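Again purely as an illustration (same caveats as the sketch above; the threshold at which a payoff to S counts as “good enough” is stipulated so that A3 clears it and A1 and A2 do not), here is the SSS test applied to the table:

```python
# A minimal sketch of the SSS test on the table above, under stated assumptions.
# SSS: an act is right iff its utility is at least as great as that of every
# alternative whose utility *for S* is good enough.

GOOD_ENOUGH_FOR_S = 50  # stipulated: A3's payoff to S clears the bar, 0 does not

acts = {  # name: (utility for S, total utility)
    "A1": (0, 1000),
    "A2": (0, 50),
    "A3": (50, 50),
}

def sss_right(name):
    """Return True iff the act is right by SSS."""
    total = acts[name][1]
    good_enough_totals = [t for (s, t) in acts.values()
                          if s >= GOOD_ENOUGH_FOR_S]
    return all(total >= t for t in good_enough_totals)

for name in acts:
    print(name, "->", sss_right(name))
# A1 -> True, A2 -> True, A3 -> True: SSS counts A2 as permissible even though
# A1 demands no greater sacrifice from S and is vastly better for others.
```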
So SSS fares little better than other versions of satisficing consequentialism. Like those other versions, SSS permits gratuitous prevention of goodness. To be fair, I reiterate that Turri intends to provide only a necessary condition on an outcome’s being good enough, and suggests that he would not endorse SSS as a sufficient condition. Perhaps there is some way to supplement this necessary condition with other conditions that would rule out the sort of counterexample just presented.
Hi Ben,
In arguing against ALSC you describe a case where you could do nothing (and bring about consequences with intrinsic value of +100) or act (and bring about consequences with intrinsic value of just +20). ALSC would treat both options as morally permissible, and you take this to be a counterexample to the position.
What do you think of the following (along fairly standard lines, I think): Perhaps both options are permissible, but we might judge you to be a worse person if you choose to prevent the better outcome. That is, we could have a negative aretaic assessment of you even if your action is permissible. For example, if you prevent the better outcome out of cruelty, selfishness, jealousy, and so on, we can have negative appraisals of your motives or character, even if the action itself is permissible.
And a defender of ALSC can (of course) also hold that not acting is better than acting in this case (after all, not acting produces better consequences); again, this is compatible with both options being morally permissible. So the view recognizes that acting is not as good as allowing the better outcome to occur.
You write: "Steering the world away from a better result towards a less good result should be unacceptable to a consequentialist". I suppose that defenders of ALSC would hold that such steering only becomes impermissible when it fails to produce the minimum required absolute level of utility (they are still consequentialists). And in other cases, they can still appeal to negative evaluations of character or motive, even if the action is permissible.
Posted by: Jason Kawall | June 29, 2005 at 11:14 AM
Hey Jason,
Good point, maybe that's what the satisficer should say. When I think about the case, trying to be sure that it's the *act itself* rather than the character of the agent that I'm evaluating, I still think preventing the greater good is wrong.
Suppose the goodness-preventing act were done not out of some bad motive like jealousy, but out of a good motive. Then what would be left to evaluate negatively here, if not the act? In that case, I'd want to say that the act was wrong, but the agent was good.
If the satisficer says that the goodness-preventing act is permissible in this case, I think he is the one who is making the mistake about what he is evaluating. He's taking a feature of the *consequence* of the act that merits a pro-attitude (its intrinsic goodness) and transferring that pro-attitude to the act producing it. But the *act* doesn't deserve any pro-attitude. It's the worst thing he could have done.
Posted by: Ben | June 29, 2005 at 12:08 PM
Hi Ben,
Hmm. If the agent were to act out of good (or even neutral) motives, and the act were to produce the minimum required amount of utility, then I'm not sure that any strong negative evaluation would be warranted. After all, the motive would be acceptable, and the consequences would be acceptable (even if not the best available).
You write (concerning such a case) that "the *act* doesn't deserve any pro-attitude. It's the worst thing he could have done". I'm not sure what to say about this. What if there were other actions that wouldn't even have produced the minimum required amount of utility? The agent's act wouldn't be the best, but it wouldn't be the worst. And even if it were the worst (suppose no other actions were possible), as long as it produces at least the required amount of utility, it's not clear that we should deem it to be wrong. The *act* is still one which produces acceptable levels of utility. Put otherwise: it would be a permissible act, even if it were the worst available permissible act. So, the act itself could deserve a pro-attitude for the kind of act it is (one that produces satisfactory amounts of utility), and not through a problematic 'transfer' of a pro-attitude concerning the act's consequences.
Quick question: could you say a bit more about how you are distinguishing between acts that deserve a pro-attitude and acts where we improperly 'transfer' to the act a pro-attitude towards its consequences? I worry that what I say in the previous paragraphs doesn't quite address your point...
Posted by: Jason Kawall | June 29, 2005 at 02:16 PM
Here is what I was thinking. Acts merit pro-attitudes in virtue of the difference they make to the value of the world. In the sort of case I'm thinking of here, there's an act that has intrinsically good consequences, but it makes the world worse than it would have been if the agent had just butted out. So my hypothesis is that satisficers have an appropriate pro-attitude towards the consequences (considered by themselves), and wrongly transfer that attitude to its cause.
Of course, this way of evaluating acts is incompatible with ALSC, because ALSCers determine the rightness of an act just by looking at what it causes, not at any alternatives (except in cases where the threshold can't be reached). I think that's a problem for ALSC. Suppose I can eliminate world hunger or have a sandwich. I choose the sandwich. Surely you won't say "good decision!" Not even if I enjoy the sandwich. Whether a decision merits a pro-attitude depends on what the options were. (This point doesn't apply to every sort of satisficing, just absolute level versions.)
I don't know if that is helpful. I have another argument against absolute level satisficing that I might try out when I am less tired.
Posted by: Ben | July 01, 2005 at 12:38 AM
Ben,
I think your case against satisficing versions of C has much initial intuitive appeal. Indeed, I am tempted to say it is another example of what I was trying to claim in my post on Pea Soup just before yours, namely that attempts to lessen the demandingness of C without a causing/allowing distinction (or some such) are not very plausible. I concluded that the demandingness objection therefore cannot stand alone, and only looks plausible when supplemented with independently motivated deviations from C.
Posted by: David Sobel | July 01, 2005 at 02:00 PM
Thanks David. You might be right about that. In any case, it looks like we'll be saying pretty similar things at our Dartmouth session, which should be fun.
Posted by: Ben | July 01, 2005 at 03:12 PM
Rats, we are scheduled at the same time at the Utilapaluza at Dartmouth.
Posted by: David Sobel | July 01, 2005 at 05:26 PM
If I understand the schedule correctly, we are in the same session.
Posted by: Ben | July 01, 2005 at 07:16 PM
Ben,
Perhaps you aren't still checking the comments to this post, but I just became aware of this site. I agree with your criticisms of those versions of SC, but the version I like is the one you mention briefly, and then only to say that you won't discuss it because it isn't really worked out. That version holds that the satisficing threshold is contextually specified (Earl Conee mentions this as a plausible view in, I think, a paper against moral perfectionism, but he does not develop it). No one has really worked out generally how to determine the morally relevant contextual factors, or particularly what they are (I need to look at the epistemology/language literature for some ideas), and you worry that the view may collapse into a kind of particularism. I think it need not. Perhaps some contextual factors can be identified without a full theory of how to determine what they are. (Obviously, the contextual features will need to be such as to meet objections from you and Mulgan without being ad hoc...) I don't know how much you want to pursue this topic, but I have more to add if you are curious.
Posted by: Rob Epperson | July 07, 2005 at 08:14 PM
Hey Rob,
Yes, I am definitely curious. Tell me more!
Posted by: Ben | July 07, 2005 at 09:49 PM
Right you are, Ben! Cool.
Posted by: David Sobel | July 07, 2005 at 11:31 PM