
June 21, 2007

Comments


Hi Ralph,

Suppose that it's right that "These different degrees of agential involvement in causing harm may make a difference to the agent- and time-relative goodness or badness of the relevant features of the available options."

Plausibly the reason this is correct is that we have intuitions with respect to proportionality. The more involved an agent is in determining an outcome that we would describe as bad or good, the more responsible that agent is. So, agential involvement is proportional to agential responsibility.

Suppose that's right. I think we have another proportionality intuition. The more responsible an agent is in determining a bad or a good outcome, the more blameworthy or praiseworthy that agent is for causing that outcome. So, agential responsibility is proportional to agential praise and blame.

That said, insofar as we have deontological intuitions to the effect that impermissible actions are actions for which an agent deserves blame -- or, comparatively, that an action is more impermissible, or more likely to be impermissible, to the extent that the agent is more blameworthy, or more likely to be blameworthy, for performing it -- then we have an independent explanation that may ground the deontological verdicts with respect to deviant trolley car cases.

Is that plausible to you? Since I'm a Consequentialist and I think we ought to draw a distinction between actions for which the agent deserves praise or blame, and actions that ought to, or ought not, be done, I'd go on to say that these intuitions mislead us insofar as we mistakenly take agential involvement to partly determine the moral status of the act, rather than the appropriate appraisal of the person. In fact, if this is correct, then Consequentialists have a nice error theory, appealing to agential involvement, concerning exactly why one might be misled into making this mistake.

This is just a small irrelevant point. There is an odd consequentialist flavour in the deontology you present. 1-4 seem to be good- and bad-making features of the act. In a trolleyesque situation, the suggestion seems to be, we ought to do the sums on how *good for the agent* these factors make the various options. It then seems like an implicit assumption that the agent *ought* to choose the option on top of the value list -- to maximise value for her.

We've had long debates here about how to define deontological views. One proposal was that being deontological is always a theory-relative term. So, in a way, if we assume that traditional, agent-neutral consequentialism is not a deontological view and others are, then your view still counts as deontological. But, on the other hand, if we take agent-relativising consequentialism not to be deontological, then it is not. At least the view doesn't leave permissions not to maximise value. I'm not sure this is the way to go. I guess my preference would be to take 1-4 in the deontological view directly as right- and wrong-makers.

I'm sure Mark S will write more about whether agent-relativising value succeeds to begin with.

Thanks, Christian! Thanks, Jussi! Here is a quick response.

1. Christian -- I strongly agree with your point that "we ought to draw a distinction between actions for which the agent deserves praise or blame, and actions that ought to, or ought not be done". However, I deny that this point supports the consequentialist error theory that you advocate. In my view, the degree to which I am (as I put it) "agentially involved" in causing a certain harm is not the same as the degree to which I am "responsible" for that harm. I might be massively agentially involved in causing a harm, but not responsible for it, if I have an excuse for my action -- e.g. non-culpable ignorance of an appropriate kind. Or my responsibility might have been mitigated, e.g. because I was provoked beyond endurance or the like. So I think that these degrees of agential involvement in causing harm are directly involved in determining what one ought to do -- not just in the degree to which one is blameworthy for having done it.

2. Jussi -- Yes, you're right, we have discussed the issue that you raise before. My view (which I think I've already aired on PEA Soup) is that there is a perfectly ordinary way of using expressions like 'What is the best thing for me to do now?' in English (and I bet the same is true of Finnish as well!) -- where this use of 'best' has to be interpreted as expressing an agent- and time-relative notion of value. (I also have more formal reasons -- having to do with the logic of dyadic deontic operators -- for thinking that there must be a ranking, with the structure of an agent- and time-relative betterness-ordering, that is related in this way to the truth about what one ought to do. But those reasons probably don't have the intuitive pull on most philosophers that they have on me....) So, I'm quite happy with the view that what you ought to do now is anything that is a necessary part of whatever is the best thing for you to do now.

Anyway, the term 'deontological' is of course a technical term, used only by philosophers. As I think I've said before, the most useful way to use it, as it seems to me, is so that the paradigm case of a deontological view is a view according to which it might be wrong for you to kill one innocent person now even if that is the only way for you to prevent three innocent people from being killed by someone else some time afterwards. By that test, my view counts as paradigmatically deontological!

Here's a quick P.S. to my replies to Christian and Jussi.

1. Of course, I should concede that what I have called the agent's "degrees of agential involvement in causing harm" will be more strongly correlated with the agent's degree of responsibility for the harm than anything that a consequentialist will regard as relevant to determining the rightness or wrongness of the act. I just wanted to emphasize that -- because of the phenomena of excuse and mitigation of responsibility -- I would reject any identification of these two dimensions along which acts can be assessed.

2. In replying to Jussi, I might add that there is a big danger in refusing to make use of the agent-relative 'better' or 'worse' and the like. It is awkward and cumbersome to talk about 'degrees of rightness' (although it is possible to talk about the degrees to which someone approximates to, or falls short from, acting rightly on a given occasion). So, if we restrict ourselves to terms like 'right', 'wrong', and 'ought', we may end up ignoring one of the most important facts in ethics: viz., the fact that practically everything that matters in ethics depends on comparisons between items that come in degrees. To put it another way, there are fundamental continuities in ethics: different cases can easily be arranged into a spectrum of cases, ranging smoothly from the obligatory to the permissible to the impermissible (with grey areas in between of course), and an essentially comparative term like 'better' or 'worse' is ideally suited to bring out this fundamental fact.

Thanks Ralph. Suppose you're right when you say, "because of the phenomena of excuse and mitigation of responsibility -- I would reject any identification of these two dimensions along which acts can be assessed." I agree. I only meant to suggest they are proportional, not identical, leaving open the idea that there may be another factor that might, in some cases, make them disproportional. But now what I want to do is to push you to describe a trolley case in which you have the intuitions that (i) the agential involvement is high, thus contributing disvalue to the badness of the act, (ii) the responsibility is low, given some excuse or mitigating factor, while the act is also (iii) impermissible. I think that kind of case would be needed to help undermine the error theory I'm pushing for.

Ralph (or anyone else),

So what we need to test the view seems to be cases in which the involvement with the positive outcomes is more entangling than the involvement with the negative ones, or vice versa, and where we can vary the degree of involvement with the one without also varying the degree of involvement with the other. (Granting you this use of betterness for now, despite my general worries about using agent and time relativity to capture what non-consequentialists are on about.) Do you have any cases?

FWIW, the main idea seems plausible to me off the cuff. I'm tempted by the idea that while the avoidance of enough badness can override a duty not to harm, it might take more badness to get one also to justifiably ignore double effect. It looks like your theory would be in a position to explain this idea, at least if, in the DDE case, what one did made one more involved as an agent than just bringing about the same harm without aiming at it.

Ralph,

I don't think there's much disagreement here. It's just that, well, this sounds fine:
"what you ought to do now is anything that is a necessary part of whatever is the best thing for you to do now."

But, then I start to think of cases like the ones Wallace and Dancy talked about. Going to see one film rather than another might be the best thing for me, but ought I to go see it rather than the other? Ought I not to go see it? I was never big on maximising.

Also about this:
"if we restrict ourselves to terms like 'right', 'wrong', and 'ought', we may end up ignoring one of the most important facts in ethics: viz., the fact that practically everything that matters in ethics depends on comparisons between items that come in degrees."

I notice that you have a 'may' there. I don't think that this necessarily happens though. It's true that oughts and rights and wrongs are more at home on the overall level. But, we can use reasons, right-makers, wrong-makers and so on to capture the contributory level that comes in degrees. I also think that killing is more wrong than breaking a promise, and that sounds natural to me.

I have a question though that's more on the original point. You seemed to attach agential involvement to causing harm and manipulating the events. That kind of suggests that you are making the doing/allowing distinction and that this notion works on the doing side of things. I wonder if you want to say this. My intuition is that you could use the notion on the omission side too. Some of the language you use about intentions fits this picture. So a doctor who carefully ensures that the patient gets no help is more agentially involved, and worse for that, than someone who fails to realise that the other person needs help and therefore fails to save her. Neither one necessarily causes harm or is manipulating the events.

Very interesting discussion. I have a couple of comments
1) Imagine that you are walking by the banks of the river and see a baby drowning. All you need to do to save the baby is to bend over and pick it up. If you do not do so, have you caused the baby's death or simply allowed it to die? I think that you have caused the baby to die, because you are a causal factor in determining the outcome. If this is correct, then the person who does not throw the innocent person in front of the trolley is causing the death of the five other people. (Of course, one would have to believe that doing so would, in fact, stop the trolley, and this seems a bit far-fetched.) The distinction between 'causing' and 'allowing' seems to me to be not very useful in this scenario, so I think that (your) 1 is not true.
2) When considering a 'trolley-like' problem, why not look at it as a choice between evil outcomes and, following Ignatieff, utilize the principle of the lesser of two evils in choosing which outcome to pursue?

Very interesting, but I’m not persuaded that, in this context, thought, effort, and manipulation are morally relevant. To see this, compare two cases where you can push one person in front of the trolley to save five. In case one, all you have to do is push this person once, and he’ll fall onto the track. In case two, you have to carry this person through an elaborate obstacle course – a task that requires considerable thought, effort, and constant manipulation of the course of events. If we assume that the person to be thrown onto the track is unconscious in both scenarios, these cases strike me as morally on a par with each other.

If I’m right, then it seems to me that all you have to distinguish the case where you redirect the trolley into a person from the case where you push a person into the trolley’s path is that the former involves foreseeable harm while the latter involves intentional harm. This distinction, as I’m sure you know, is very problematic. To be fair, you do mention a possible distinction between manipulating a process versus manipulating a person, but it seems to me that this distinction can’t really explain why redirecting the trolley into a person is permissible while pushing someone into its path isn’t. Rather, this distinction merely restates your intuitive reaction to these cases.

Hi Ralph,

A question about agential involvement. Reconsider the classic trolley. Suppose if I divert the trolley it will hit Smith and come to a halt, saving five lives. But suppose Smith need not be killed. He's sitting on the track wired to his iPod and could be warned off. I know this but I'm not now in a position to warn him. My intuition so far is that I cannot divert the trolley.
Now add this. Suppose it would be extremely involved, but I could get a message to someone to warn Smith off the track in time. In this case, I know, the trolley would go around the loop and kill only three people (of five).
Among the options are (a) and (b).

a. Divert the trolley and let it kill Smith.
b. Divert the trolley, get very agentially involved, in order to let the trolley kill three.

Obviously (b) includes much more involvement on the part of the agent and more deaths. Yet (b) seems (to me anyway) better than (a). Of course, not everything is equal, but it seems to me that no matter (almost) how much involvement was required, it would still be better to get that involved to preserve fewer lives.

Mike,

wouldn't this
"it seems to me that no matter (almost) how much involvement was required, it would still be better to get that involved to preserve fewer lives."
imply that you would push the person with a large back-pack down the bridge to stop the train in the second of the original cases? Just curious. I mean I'm not sure I've ever had any serious intuitions about these cases when they get complex.

And, I think it is compatible with Ralph's proposal that agential involvement is only weakly bad-making. I'm starting to think that agential involvement is really evidence for the intentions and valuations of the agent. We assess the goodness and the badness of the agent in question on the basis of those revealed attitudes, and our judgments of the action, and its badness, reflect what we think of the agent. Maybe there would be room for someone to argue that this doesn't make the action per se any worse or better. But, separating the evaluation of the action from the evaluation of the agent seems difficult.

wouldn't this, "it seems to me that no matter (almost) how much involvement was required, it would still be better to get that involved to preserve fewer lives."
imply that you would push the person with a large back-pack down the bridge to stop the train in the second of the original cases? Just curious.

No.

And, I think it is compatible with Ralph's proposal that agential involvement is only weakly bad-making.

Maybe so. I wonder what Ralph thinks.

That's a great series of comments! Here are some brief replies:

1. Jussi -- I basically agree with everything that you say, except of course for your first point where you criticize "maximizing". In my view, it would be a perfectly natural way for Jay Wallace to express the conclusion of his reasoning about how to spend the evening by saying "I guess I should go to see Rocco and his Brothers". Of course, it wouldn't in any way be morally wrong of him not to go to see Rocco and his Brothers; no one would be entitled to blame or resent Wallace if he didn't go. But in my view, what I call the "practical" or "deliberative" use of 'should' and 'ought' is often used to express the conclusions of practical deliberation about what to do -- even if there's no question of there being anything morally wrong with one's not doing what one concludes that one should do.

2. John -- You seem to be representing a hardline consequentialist position. I would say that if I don't save the baby, I have behaved despicably, but I haven't killed the baby (and the criminal law would agree with me, by the way): my failure to save the baby is an omission, not an action. So even if it is right to say that my omission was a "cause" of the baby's death, we can't say that any action of mine caused the baby's death.

On your other point, of course, the issue of which is the lesser of two evils is one factor in deciding what it is right to do, but as a non-consequentialist, I don't regard it as the only factor.

3. Mike Valdman -- My intuitions seem to differ from yours. I think that it's even worse if I have to manipulate my victim through an obstacle course in order to get the trolley to smash into him. I would be even more horrified by this course of action than by the action of the person who simply pushes the bystander into the path of the trolley.

4. Mike Almeida -- I think I agree with you that (b) is better than (a). But the additional agential involvement is precisely not agential involvement in causing harm. It is agential involvement in saving Smith from harm. So this additional agential involvement is actually a feature that makes (b) better, not worse!

5. Jussi again -- I don't know what you mean by "weakly bad-making". At all events, it seems to me that the intention or attitude that an act expresses is a feature of the act itself. (In social interaction, it is usually the most important aspect of the act in question!) So the fact that an act results from so much thought and effort being directed towards someone's being harmed is a feature of the action, and so may well be a feature that makes the act itself a worse thing to do -- not just a feature that reveals something about the moral character of the agent.

Ralph, thanks. I thought you might say something like this,

It is agential involvement in saving Smith from harm. So this additional agential involvement is actually a feature that makes (b) better, not worse!

I think I agree with that. But can't we say the same thing about the multiple loop trolley case? In manipulating the trolley's course, can't we describe that as "agential involvement in saving five lives"? And wouldn't that make the additional involvement a good-making (or a better-making) property?
So, I'm asking this: why are we free to describe my case as including involvement-in-saving-from-harm and not also free to describe the multiple loop cases as including involvement-in-saving-from-harm?

Ralph
Thanks for responding to my comments. You are correct that I am a consequentialist; for some reason, try as I might, utilitarianism always seems to reassert itself. But I keep looking at alternatives. Anyway, I would argue that a person's deciding not to act is itself an action, and that is what would make that person responsible for the death of the baby. I think that often the distinction between killing and allowing to die is a false distinction and begs the question towards one position over another, but that is another story.

I was wondering whether, if we modify the example that you presented and place the innocent person on an unused siding, your position changes if a person then has to pull the switch and send the trolley down the unused track, resulting in the death of one innocent person but saving the other five innocent people. Would the principle of double effect work here to justify pulling the switch, insofar as the death of the one person is not the cause of saving the other innocent lives but is simply a foreseen, but unintended, consequence of pulling the switch?

1. I have just realized that I never replied to Mark or to Christian's second comment. So here is a quick response to them:

What Mark and Christian say is quite right. To develop these ideas in more satisfactory detail, I ought to come up with a wider range of cases. In particular, just as Christian says, I ought to try to produce a trolley case in which (i) the agential involvement is high, thus contributing to the badness of the act, (ii) the responsibility is low, given some excuse or mitigating factor, while (iii) the act is also impermissible. In addition, just as Mark says, I ought to try to produce some cases in which the agent has a greater degree of agential involvement with the good outcomes of an act than with the bad outcomes, or vice versa. I'm fairly confident that I can do both of these things, but perhaps I should do it in another PEA Soup post (rather than burying it deep in this comments thread).

Mark is of course quite right that one of the main goals of my suggestion is to unify the acts / omissions distinction with the doctrine of double effect (DDE). The general idea is that where one intends a certain outcome of an act, one is even more agentially involved in bringing about that outcome than if one merely foresees that outcome, or if the outcome results from an omission rather than an act.

2. Mike Almeida -- In your case (b), I am not agentially involved in harming anyone; I am agentially involved in saving the one (by communicating with him to get him out of the way of the trolley), and also in saving two of the five, but not in harming anyone. In the Multiple Loop Case, I am agentially involved in saving the five, as well as in harming the one; this definitely makes my action much better than if I were agentially involved in manipulating the trolley through the complicated array of railway junctions so that it smashes into the one without any agential involvement in any good outcome at all. Still, because in the Multiple Loop Case, I am agentially involved in killing the one as well as in saving the five, whereas in your case (b) I am only agentially involved in saving people and not in harming anyone, I have no difficulty in saying that diverting the trolley is clearly better in your case (b) than in the Multiple Loop Case.

3. John -- The case that you describe sounds rather like the original trolley problem case, as described by Philippa Foot et al. You're completely right that in this case, diverting the trolley appears to be allowed by the doctrine of double effect (DDE). But as Mike Otsuka and others have argued, in the Loop case, the trolley's smashing into the one is intended, and not merely foreseen, and so diverting the trolley does not seem compatible with the DDE in the Loop case; in my terms, there seems to be a higher degree of agential involvement in causing harm in the Loop case than in the original trolley case.

Ralph,

about this:
"So the fact that an act results from so much thought and effort being directed towards someone's being harmed is a feature of the action, and so may well be a feature that makes the act itself a worse thing to do -- not just a feature that reveals something about the moral character of the agent."

The reason I'm starting to get worried about this is the following. The question we pose with the trolley problems seems to be crucially a practical question from an agent's deliberative perspective. It's essentially: what ought *I* to do in a situation like this?

I like the idea that from this perspective we go about answering this question by starting from another question -- what would be the best thing for me to do? This question directs our attention to the many features that make the options more or less good or bad for me to take. But, it seems odd that in this situation I would be required to take into account the level of agential involvement as a feature that makes my actions good or bad. That seems to be the wrong kind of consideration. It does seem like an odd question to ask in deliberation -- how much of my agency would this involve if I did that?

Rather, it looks like the different options I have open to me already come with different levels of agential involvement. Then we look for considerations that would make that level of agential involvement the best option for me in the situation. When we look for the considerations our gaze seems to be essentially directed outwards and not to our own role. Being worried about that seems oddly fetishistic. I'm sorry that this is a bit vague.

Ralph
Am I correct in assuming that from your point of view the level of agent involvement is determined by the degree of intentionality of the agent in bringing about the first event in a series of events? If this is correct, then consequentialism might work in some cases, DDE in others, and some deontological principle in the ones that have very high degrees of agent-intentionality. It seems that in some cases we might look at diverting the trolley as a response to a situation we did not create, which might be understandable in consequentialist terms; but where we have to bring about an event to cause other events, we are not responding but creating the situation, and in these types of cases consequentialism has less intuitive force. This seems to be in line with some of the research being done by experimental philosophers when they test the intuitions of the 'folk' regarding intentionality and moral responsibility.

Thanks for a thought-provoking post.

Jussi -- You say:

"When we look for the considerations our gaze seems to be essentially directed outwards and not to our own role. Being worried about that seems oddly fetishistic."

But I put it to you that only explicitly consequentialist deliberation is "directed outwards and not to our own role" in your sense. Any deontological view is going to make it necessary for the deliberator sometimes to ask herself, "Will this harm come about through an act of mine?" I.e., any deontological view will require the deliberator to be concerned with "her own role", as you put it.

Of course, some consequentialists have accused deontologists of being fetishistically concerned with "keeping their hands clean". But deontologists should just reject this accusation. There is nothing fetishistic about being concerned with what one is doing, and with what one's role is in the world. The idea that this is fetishistic is simply a consequentialist delusion. (I have my ideas about what explains this delusion, but I'll leave explaining these ideas for another occasion.)

hmh. This claim that "any deontological view will require the deliberator to be concerned with 'her own role'" cannot be right. There have to be some considerations out there that are not based on agent-neutral value but that can be thought of and that make a difference to how good the options are for the agent to take. The moral standing of the other persons involved must be one such consideration, and what our relation to them is another. Would they be treated as mere means and not as ends in themselves, as they deserve? What kinds of rights do they have in the situation they are in? Are they innocent bystanders or threats? Are they someone under threat whom we can direct to harm different people? Whatever our answers to these questions are, they surely have a bearing on how good different options are for us. And, all of these are deontological considerations that don't bear solely on how good the world will be for the World Agent as a result. But, none of them are considerations about what our agential involvement would be.

On the contrary, it is your claim that "cannot be right". Every single one of the considerations that you mention -- at least as they are normally understood -- concerns the agent's role. To comply with the requirement that one should not violate people's rights, or that one should treat people as ends and never merely as a means, one must ask oneself, "How am I treating these people?", "Do my actions violate their rights?", "Does the way in which I am treating them express a proper respect for their humanity?", and so on. I.e., one must ask oneself questions about one's agential role in relation to those people.

Of course, we could understand these considerations in a different way. We could advocate a view to the effect that the right act must minimize the number of rights-violations that occur in the world as a whole or maximize the extent to which in the world as a whole persons are treated as ends and not merely as means or the like. But this is a consequentialist view, not a deontological view. This was shown by Nozick a long time ago, when he pointed out that this consequentialist view (Nozick called it a "utilitarianism of rights") implies that it is right for you to violate people's rights whenever violating their rights would lead to fewer rights-violations in the world as a whole.

According to a genuinely deontological view, on the other hand, you shouldn't violate someone's rights even if your violating those rights would lead to fewer rights-violations in the world as a whole. As Parfit rightly said, such a deontological view gives different people different aims. It gives me the goal of ensuring that I do not violate people's rights, and it gives you the goal of ensuring that you do not violate people's rights. In general, the goal that any deontological theory gives you is a goal concerning your role in the world, while the goal it gives me is a goal concerning my role.

Ralph:

I really like this:

"But in my view, what I call the "practical" or "deliberative" use of 'should' and 'ought' is often used to express the conclusions of practical deliberation about what to do -- even if there's no question of there being anything morally wrong with one's not doing what one concludes that one should do."

I'd love a reference if you've written on this. My reaction to the trolley car cases has been different from yours. To make a long story short, I'd distinguish between acts that are wrong and acts that ought not to be done. We have mixed intuitions. Sometimes we consider the way a decision is made and assess the action on the basis of how the decision to perform that act was made. Perhaps this is close to what people call an agent-relative, or partialist, view. However, we also have intuitions to the effect that we assess an act by the relative goodness or badness of the outcome. And perhaps this is what people mean by calling Consequentialism an impartial view, or a view from nowhere.

To the point: I take considerations concerning agential involvement in causing harm to concern how we evaluate the agent, not the outcome, and so to be relevant to the wrongness of an act, not to whether the act ought not to be done. Agential involvement, insofar as I have any intuitions with respect to it, seems to me to concern those ways of deliberating you mentioned above. If I tripped and pushed a man on the track, only for him to be run over by the trolley, saving five people, I was very causally involved. If I push the man on purpose I'm also very causally involved. Would you count such cases, supposing we describe the movements of the actors as relevantly similar, as identical with respect to the amount of badness the agent, in each case, contributes to his options? I'm assuming the answer is no, though correct me if I'm wrong.

If I'm not wrong, then it seems that agential involvement in causing harm, whatever it ends up being, will include the intentions of the agent as a key component. It is for that reason that I think intuitions with respect to agential involvement support only character appraisals, or, on my own view, appraisals of an act as wrongly performed rather than as one that ought not to be done. Simply, the more wrapped up in a situation an agent is, the more the agent must deliberate about the effects of his actions; hence his intentions are more involved, he is less ignorant by virtue of being wrapped up in the situation, and so on -- which is to say the agent is more deserving of blame for carrying out the action, which is to say what the agent has done is more likely to be wrong, where 'wrongness' expresses an act appraisal with an eye to the blameworthiness of the agent who has performed it.

In general, the goal that any deontological theory gives you is a goal concerning your role in the world, while the goal it gives me is a goal concerning my role in the world.

But you might have forms of indirect consequentialism, right, that give everyone the same goal but count as deontological. The goal might be maximizing overall utility, and the rule for doing so (as Mill more or less urges) might be to maximize for the circles you run in. That should count as deontological, I think, since the collective pursuit of our goal might make that goal less well achieved. In short, the theory can run into prisoner's dilemmas and, so, self-defeat.

I don't get this. Sorry - I am a bit thick occasionally. Say that I'm facing the person on the bridge and I could push him down to stop the trolley and save the five. I ask myself why I shouldn't do this. Here are two possible answers:

1. The person has some features because of which she deserves not to be killed in order for some other people to be saved.

2. By not pushing, I would not be causing harm or doing any killing, and thus wouldn't be agentially involved. This would make that option better for me, and therefore I ought not to push the person down.

I thought 1 was about the other person and her features as the source of the reason for me, and 2 was about me and my features as the source of my reason. 1 seems outward-looking whereas 2 seems inward-looking. Of course, both these reasons have non-consequentialist consequences for what I ought to do. But, given what you say, it seems like there is no room to make this distinction, which seems very intuitive to me. You seem to imply that when I think of 1 in the situation, I'm still concerned about my own role.

I guess I wanted to suggest that for the deontologist there has to be a way of locating the sources of requirements in different places. I thought that the sources of requirements must be identical with whatever makes the action good for me to do. The agential involvement you introduced seemed to locate a source/good-maker in the intentions, thought, effort, and amount of causal manipulation done by the agent. These considerations seem rather different from the rights, moral status, humanity, etc., of others. Maybe there is no way to articulate this difference.

1. Christian -- I've written about the meaning of 'ought' in a paper called "The Meaning of 'Ought'", Oxford Studies in Metaethics vol. 1 (2006). There's an expanded and improved statement of those ideas in Part I of my forthcoming book, The Nature of Normativity (Oxford UP, 2007), esp. Chapters 4-5.

You draw a distinction between (i) evaluating an action on the basis of how the decision to perform the act was made, and (ii) assessing the act by the relative goodness or badness of the outcome (by which I assume that you mean, "relative to the goodness or badness of the outcomes of the available alternative acts"). I agree that this is a real distinction; but I don't think that it is the same as the distinction between agent-neutral (consequentialist) and agent-relative (non-consequentialist) views of what is morally right or wrong. (This is because I believe that there are agent- and time-relative sorts of goodness and badness, and I also believe that the notion of the "outcome" of an act must include the act's logical consequences, and so the occurrence of an act is part of that very act's outcome.) In fact, I think that your distinction articulates what I would call the difference between the course of action that is "rational" and the course of action that is "objectively right".

Anyway, I want to repeat what I said earlier: I'm proposing that the degree to which the agent is "agentially involved" in causing a harm is a feature that contributes towards determining the act's "objective rightness", not just its "rationality". So I think I can't accept the suggestion that you're making.

2. Mike (Almeida) -- You're quite right. What you say reveals that the notion of a theory's "giving one a goal" is trickier than Parfit noticed. I think it's clear that there is going to be a sense of "giving someone a goal", though, according to which the sort of indirect consequentialism that you have in mind gives each person a different goal: it gives you the goal of maximizing for the circles you run in, and it gives me the goal of maximizing for the circles that I run in. It would be a thoroughly worthwhile project to analyse these different senses of "goal" but it would take us too far afield right now.

3. Jussi -- Of course you're quite right that we can express these deontological considerations by talking about how "the person has some features because of which she deserves not to be killed in order for some other people to be saved." But I claim that once we think through what these deontological considerations really involve, it turns out that they do not purely concern the person, but rather the relation between the person and various agents who might treat them in various ways. If these considerations really only concerned the person, and not the relation between the person and the agent who is treating them in various ways, I fail to see how they could fail to lead to a picture like Nozick's "utilitarianism of rights".

You say that the "agential involvement" that I introduced seemed to locate the good-maker in the intentions, thought, effort, and amount of causal manipulation done by the agent, and that these considerations seem rather different from the rights, moral status, humanity, etc., of others. But I was referring to the agent's "agential involvement in causing harm to his victim". So I was always focusing on a relational feature of the agent's action -- a feature that consists in the relation in which the agent puts himself to the person he is affecting.

The comments to this entry are closed.