
September 27, 2010

Comments


Hi Jussi,

I like your analysis. I think that you're exactly right.

P.S. I know that there's not much point to my posting this comment, but since I always wanted someone just to say "you're right," I thought that you might appreciate such a comment as well.

Hi Doug,

aww - thanks. How nice of you! I'm going to start doing this too. Maybe I remember this wrong, but I recall seeing your name in the acknowledgements of the paper. I'm sure you've said something similar about their view.

Reminds me of a conference where a friend of mine gave a paper and there were no questions - just a very long, awkward silence. He felt really bad about it. Only after the talk did it become clear that everyone had simply agreed. I wish someone had thought to say sooner, 'yep, that sounds right'.

Not that there probably isn't an answer by Jamieson and Elliot to what I say above.

Yes. I did comment on an early draft of their paper. I don't remember the details, but I believe that my comments were focused mostly on the general plausibility (or implausibility) of progressive consequentialism. I wasn't so much concerned to discuss whether it fares better than satisficing consequentialism, for I think that dual-ranking versions of act-consequentialism can avoid the demandingness objection while also meeting the arbitrariness objection.

Doug, you're right.

Josh, you're right.

Wow! Assuming that agreeing is transitive (maybe it isn't - could x agree with y, y with z, and x not with z?), I believe that this is the first time anyone has agreed with me. Thanks guys!

Here's an idea. Suppose you bracket the problem of finding the alternatives at a time. Let S = {a0, . . ., an} be the set of alternatives open to the agent at t, where a0 would yield the best outcome, a1 a slightly less good outcome, and so on. Instead of looking for a baseline, locate a topline. The topline is the best I could do, and in general (though not always; sometimes it will just be less bad than every alternative) it will be some positive value n. The rest is an empirical matter. Certainly the property of being demanding reflects the difficulty of conforming to the principle over time. Ask whether it is too difficult for agents, in general, to perform actions between the best, producing n, and half the best, producing n/2. If that is in general too hard, then consider the interval between n and n/3. We can then come to some reasonable conclusion about when a principle is appropriately demanding. Suppose we come to some such conclusion. Then a reasonably demanding utilitarian principle will require agents to perform actions in the interval between the best, n, and, say, n/3.
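[Editorial sketch: the interval proposal above can be made concrete in a few lines of code. This is a hypothetical illustration only - the function name, the numeric outcome values, and the divisor parameter k are all assumptions for the example, and, as the comment notes, the sketch assumes the topline is a positive value n.]

```python
def permissible(values, act_index, k=3):
    """Illustrative sketch of the 'topline interval' test.

    values: outcome values of the alternatives open to the agent at t
            (a hypothetical numeric stand-in for goodness of outcomes)
    act_index: which alternative is being evaluated
    k: divisor fixing how demanding the principle is; the empirical
       question above is which k agents can in general live with
       (k=3 mirrors the n-to-n/3 example in the comment)

    An act is permitted iff its outcome value falls in the interval
    [n/k, n], where n = max(values) is the topline. Assumes n > 0,
    matching the 'in general ... some positive value n' caveat above.
    """
    n = max(values)  # the topline: the best the agent could do
    return values[act_index] >= n / k
```

For example, with alternatives valued [10, 8, 4, 1], the best act (10) and the second-best (8) fall within the n-to-n/3 interval, while the worst (1) does not.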

Mike,

there's nothing wrong with that idea as such. In fact, I don't think that it's any better or worse than either traditional satisficing or progressive consequentialism. Indeed, it seems like all three views are logically equivalent: whatever version you take of one of the views, there will be co-extensive versions of the other two. If the right actions are in the range between n and n/3, there will be some counterfactuals on which the same acts improve things, and also a view on which you have to create more value than x utils that permits the same acts. So, it seems like these are just notational variants of each other.

Also, the methodology seems to be the same. We have some intuitions about which acts are permissible and not too demanding. We then use those intuitions to fix the right counter-factuals, appropriate range from the top, or the right baseline. The traditional objection to this is that it is arbitrary or ad hoc. I'm not sure that's quite the right word but at least the resulting views seem to have little explanatory potential left. But, then again, providing explanations might not be what ethical theories are for.

Jussi, I ask mostly as a curiosity (not having read J&E's paper), but what would their likely reaction be to recharacterizing their view as a form of mono-deontic intuitionism or deontology? In other words, the view could be seen as claiming that there is a single duty: beneficence, in W.D. Ross's parlance. I realize they're putting this forth as an alleged advance for consequentialism, but given your criticisms, the view seems more plausible as a species of non-consequentialism.

The traditional objection to this is that it is arbitrary or ad hoc. I'm not sure that's quite the right word but at least the resulting views seem to have little explanatory potential left.

Doesn't the empirical evidence undermine the ad hoc objection? We're not merely selecting an interval arbitrarily; we're selecting one that we have confirmed does not demand too much or too little. How is that ad hoc?

In fact, I don't think that it's any better or worse than either traditional satisficing or progressive consequentialism. In fact, it seems like all three views are logically equivalent as whatever version you take of one of the views there will be co-extensive versions of the other two views

Several of your objections appeal to the difficulty of finding a baseline, and I agree with them. But finding a topline is not hard. So this view has the advantage (or so it seems) of avoiding all of the baseline-location worries. The logical equivalence claim seems a bit strong, but who knows.

Michael,

I'm not sure about that. At least in Ross, the duty of beneficence is a maximizing one. If there are no other relevant moral claims present, you ought to help others as much as you can. There's no slack in this respect. But even if you could phrase J&E as saying that there's only one duty - the duty of beneficence - this duty is special in that it's only a duty to make things better, not best. Of course this is deontological in the sense that there are moral options: you are permitted to bring about sub-optimal outcomes. But then satisficing is a deontic view in this sense too. So, you are right that there are tricky questions here about which views we should call consequentialist and which deontological (though this has never seemed a very interesting question to me anyway, but that may just be me).

Mike A,

I'm not sure how that is an empirical process. Of course, you need empirical evidence of what the actions are, what they demand, and what the consequences are. But, to think about whether an act demands too much or too little is a priori. And it is something all these views take advantage of. The main point is that we do this first, and only after this do we build our theory to fit the intuitions. Some people think that this should go the other way around: we first build our moral theory on some solid, fundamental foundation, and then our theory gives us a vantage point from which to assess and justify our intuitions. Clearly, this isn't possible if our theory is fixed by our moral intuitions about demandingness.

Also, I don't see what help the topline is. Honing in from that to the appropriate range of permissible actions seems to be the very same process as looking for the baseline. The top is up there on both views.

I'm not sure how that is an empirical process. Of course, you need empirical evidence of what the actions are, what they demand, and what the consequences are. But, to think about whether an act demands too much or too little is a priori.

This is a pretty clear point of disagreement. I don't know how one could determine a priori whether a theory is too demanding. Certainly, you'd have to have some information about moral agents, their psychological, cognitive, and physical capacities, and so on. All of that is known empirically, I think we can agree. There are no doubt worlds W in which requiring that moral agents always do the best is not demanding at all; the capacities of agents in those worlds might make it a breeze to do so. And there are worlds W' in which it is extraordinarily burdensome to always perform the best action. I don't think we could know where our world stands without some observation. And I don't think I could know a priori, for every world in the range from W to W', whether everyone in that world ought to do the best or whether not everyone need do so.

Also, I don't see what help the topline is. Honing in from that to the appropriate range of permissible actions seems to be the very same process as looking for the baseline. The top is up there on both views.

Yes, the topline is available for every view. Your objections to those views focus on their failure to locate a non-arbitrary baseline. I'm urging that they don't need to locate a baseline; they can instead locate a non-arbitrary required interval of actions from the topline. It is non-arbitrary since there is (in principle, anyway) an empirical justification for the interval: it is selected on the basis of how demanding it is on actual moral agents, given their psychological, cognitive, and physical capacities.

Mike,

I'm afraid I'm still not following you. Maybe it would be helpful to distinguish between two notions of demandingness that I worry you are conflating.

First, let us say that there is a descriptive, empirical notion of how much 'effort' an act demands in given circumstances (though there are worries about how to measure this - maybe in terms of how many personal projects the agent would need to forgo). So, we need to know whether statements such as the following are true:

Act A1 in C1 demands x amount of effort.
Act A2 in C2 demands y amount of effort.
Act A3 in C3 demands z amount of effort.

To know whether these are true, you are right, we need to know a lot about agents - their psychological, cognitive, and physical abilities - how the world is in circumstances C1, C2, and C3, what kinds of consequences the actions have in them, and so on. But all of this will be a posteriori.

However, there's also another, normative notion of demandingness that we use to determine whether morality could require x, y, or z amount of effort from agents in certain circumstances for the sake of some good consequences. Here we ask whether an agent *is obliged* to do A to bring about outcome O even if that would require x amount of effort from her. So we are in the realm of obligations and oughts. One way to think about this is that we can form the following kinds of conditionals from the previous claims:

If act A1 in C1 demands x amount of effort, then the agent is obliged to do A1 because morality can require x amount of effort for bringing about O1.

If act A2 in C2 demands y amount of effort, then the agent is no longer under an obligation to do A2 because morality cannot require y amount of effort for the sake of bringing about O2.

Now, what I don't see is how these conditionals could be empirically tested. We can test empirically whether the antecedent is true, but that doesn't tell us whether the whole conditional is true. Neither does knowing which world we are in tell us anything about these conditionals. Of course, once we know which of these conditionals is true and which world we live in, we can tell what morality requires of us in our circumstances. However, to get to that point, the crucial piece of information - the conditionals - is a priori.

Let me know if you think that there is an empirical way of knowing which of these kinds of conditionals is true. If there is, you have discovered an empirical way to test ought claims. That would be a major coup in moral philosophy.

As long as we need to know the conditionals a priori, even on your view locating the interval is based on normative intuitions. This is because it is the conditionals that do the work of telling us what the right interval from the top is for the correct ethical theory; just having the antecedents of the conditionals won't be enough. I'm not saying that this process of starting from a priori conditionals makes your view arbitrary. It does, however, make it less explanatory.



