A lot of moral theorists are sceptics about Act Consequentialism (AC). For example, some of these sceptics think that we need to qualify AC with the following two agent-relative elements: (i) “deontological constraints”, which forbid us to do certain horrible acts even if those horrible acts would make the world as a whole a slightly better place; and (ii) “agent-relative prerogatives”, which allow us sometimes to pursue our own personal projects or commitments even if we thereby fail to make the world as a whole as good a place as we could have done. However, a lot of these sceptics think that AC gets it exactly right about our positive duties of beneficence, such as our duties to help those who are in need.
This seems to me a half-hearted compromise. Those who reject
AC, I suggest, should reject it root and branch. So I am drawn towards a more thoroughgoing
nonconsequentialism, according to which absolutely no moral duties – indeed, absolutely no reasons for action at all –
are agent-neutral in the way that AC thinks of our moral duties as being. On
this approach, then, the moral duty to help those in need would not just
consist in morality’s giving me the aim that those who are in need are helped. It would consist in morality’s
giving me the aim that I play a role in helping
those who are in need. In that sense, this approach makes this duty agent-relative.
Suppose that from an impersonal, agent-neutral point of view the world as a whole will be equally good whether I play a role in helping those who are in need or not. (Suppose that if I don’t help those in need, others will step up and do the helping instead of me, etc.) According to AC, in this situation, I have no compelling moral reason to help, since the world as a whole will be equally good whether I do any helping or not. Indeed, from a moral point of view, I should be quite indifferent between my helping and my not helping. According to an agent-relative interpretation of the duty to help, on the other hand, I should not be indifferent. I should have a definite preference for the state of affairs in which I play a role in helping those who are in need, even if the world as a whole is not a better place as a result.
I haven’t begun to work this idea out in detail, but I suspect that according to the most plausible way of developing this more thoroughgoing nonconsequentialist approach, reasons for action are always reasons for the agent to put herself into the right sort of relationship with the intrinsic values that are at stake in her situation – where the “right sort of relationship with these intrinsic values” may be a non-harming relationship, or a protective relationship, or a creative relationship, but never just the simple relationship of “promoting” that is favoured by AC.
I can't tell whether you mean it to be a feature of the view that it has the consequence that if I were about to help someone in need whom you could equally well help, you would have a reason to push me aside so that you could do the helping. In any case, this does seem to be a consequence, and it seems rather implausible.
(Honeymoon's over, Wedgwood.)
Posted by: Jamie | February 12, 2007 at 07:29 PM
I also wonder how much of this would be an objection against AC. There are many ways in which AC theorists incorporate agent-relative constraints and prerogatives. One way to do this is the Smith way of indexing value to agents. So, what I ought to do is to maximise value[Jussi], whereas you ought to maximise value[Ralph]. If that's possible, they can also agent-relativise the duty of beneficence. They can say something like this: from my deliberative perspective, my helping others has consequences that are of more value[Jussi] than your helping them, whereas from your perspective your helping others maximises value[Ralph]. There are problems with agent-relativising value, but it doesn't look like your idea creates a new problem.
Posted by: Jussi Suikkanen | February 12, 2007 at 07:37 PM
I wouldn't say that in this case, I'd have a reason to push you aside in this way. (I don't agree with Joseph Raz et al. in thinking that whenever you have a reason to pursue a goal you have a reason to take any means that would achieve the goal; you only have a reason to take good means to that goal.)
Still, I would say that in this case, I might well have a reason to try to persuade you to let me do the helping instead of you, or at the very least to help you do the helping.
Posted by: Ralph Wedgwood | February 12, 2007 at 07:37 PM
Jussi, let me clarify: by AC I meant classical AC, according to which the sort of good or value that is to be maximized is itself an impartial and agent-neutral value.
I agree with you that if we relativize the value that is to be maximized, then a "consequentialist" (or as I would prefer to say, "teleological") approach could accommodate my idea about agent-relative positive duties.
Posted by: Ralph Wedgwood | February 12, 2007 at 07:55 PM
Ralph,
I agree with this part:
reasons for action are always reasons for the agent to put herself into the right sort of relationship with the intrinsic values that are at stake in her situation
But "the right sort of relationship" might mean butting out. Right?
Posted by: Robert Johnson | February 12, 2007 at 09:08 PM
I don't think I'm latching on to the spirit of this idea.
Okay, so you wouldn't push me out of the way just to be the helper, but you'd try to persuade me to let you do the helping. (I wonder what you'd say.) What if you see me reaching for the life preserver to throw into the water (there's a drowning violinist)? If you sprint at top speed, you can probably grab the life preserver before I get to it and thus be the one to toss it into the choppy brine.
I find it very difficult to believe that you have any reason to do that.
Posted by: Jamie | February 12, 2007 at 09:25 PM
Jamie,
What about this watered-down version? If the suggestion is merely that I not be indifferent to my role in the beneficent action, then the non-consequentialist (of the sort we're imagining) need not construe the duty as an obligation to directly do the saving. Instead, the obligation could be to take stock of the situation and act appropriately, where acting appropriately might include not only directly saving, but also, given certain circumstances, letting others do the saving (esp. where doing so is more efficient). (I guess this is similar to what Robert has suggested?)
I'm not sure that this preserves everything behind the idea that "I should have a definite preference for the state of affairs in which I play a role in helping those who are in need, even if the world as a whole is not a better place as a result," since it gives up the agent always having a preference that she play a role in helping. But it seems consistent with the agent having some sort of play-a-role preference. (And having it even when this doesn't change the outcome.)
Posted by: Josh Glasgow | February 12, 2007 at 10:22 PM
Ralph,
I'm interested to know how extensive you think our agent-relative duties to help are. If I have reason to prefer that I, rather than others, help those that I can, then it seems that I'll have reason to prefer a state of affairs in which I sacrifice a great deal (say, to help the very needy) to one in which the same amount of help is provided through shared efforts involving less individual sacrifice. This suggests that your view may be even more demanding than standard act-consequentialism.
Posted by: Brian Berkey | February 12, 2007 at 11:45 PM
It's very hard for me to see how you can set up cases to decide between, on the one hand, versions of NC on which one component is aiming for the agent-neutrally better options within a conception where you have other goals as well, and, on the other, the version you seem to favor. For surely, on any conception where my goal is my bringing about what we might otherwise think of as the neutral good, my bringing it about will be either better for some others or worse.
Sometimes my putting myself in relation to this good in such a way that I bring it about will take some burden off someone else, and that will be better (neutrally). Or, if I shove someone aside or do the job less well than another would (in which case I hinder the seemingly agent-neutral good), it will be worse. What it looks like you need is a case where the agent-neutral good (that is, the good as proposed by the NC theory with constraints and permissions tacked on to pursuing the agent-neutral good) is equally well served either way, but one way of doing it puts me in the right relation and the other way does not. And then, I suppose, we are supposed to think that we still have a considered conviction that we ought to aim at ourselves standing in the right relation.
I'm not sure if my skepticism about coming up with the right kind of case has to do with the lateness of the hour, or just the difficulty of constructing the case . . .
Posted by: Mark van Roojen | February 12, 2007 at 11:52 PM
Jamie and Robert -- Even if I have an agent-relative duty to help, I also have a duty to treat other agents with courtesy and respect! Given my (non-Razian) view of reasons, that's enough to explain why I have no reason to try to frustrate Jamie's efforts to help the drowning violinist. Instead, I should "butt out", as Robert says; that is the right sort of (non-interfering) relationship for me to put myself in to the intrinsic value of Jamie's virtuous action. Even in this case, though, it may be appropriate for me to feel at least some mild degree of regret for the fact that I didn't help the drowning violinist.
Josh and Brian -- I used the phrase 'play a role in helping' precisely in order to allow that it may be principally by my shouldering my fair share of the burden of a cooperative activity of helping that I can best fulfill this agent-relative duty of aid. (Indeed, I suspect that this sort of reference to cooperative activity is going to play a large role in any plausible account of our moral duties.) I don't envisage this approach as being as demanding as AC because it will presumably also contain "agent-relative prerogatives" as well as agent-relative positive duties.
Mark -- You're right that it won't be easy to come up with a case that (i) clearly reveals the difference between the view that there are agent-relative positive duties and all rival views, and (ii) clearly elicits a considered conviction in favour of the former view. To mount a convincing case for this view, it will probably be necessary to argue for it as part of a more general argument for the agent-relativity of all reasons for action. All that these cases can show, I suspect, is that agent-relative positive duties are a viable theoretical option, not that they're clearly preferable to every possible alternative view.
Posted by: Ralph Wedgwood | February 13, 2007 at 04:45 AM
P.S. Let me answer Jamie's parenthetical question: "What could I say to him to persuade him to let me do the helping instead of him?"
The answer is: I would try to persuade him that he has already done his fair share of helping recently and deserves a rest, whereas I haven't done my fair share of helping yet.
(This doesn't show that agent-relative positive duties are reducible to agent-neutral duties to promote fairness, because I should still have a special concern with whether or not I am doing my fair share of helping.)
Posted by: Ralph Wedgwood | February 13, 2007 at 06:16 AM
Ralph,
Nice post. I have three questions:
(1) Is the view teleological? You say some things that suggest that it is, such as your claim that you should have a definite preference for the state of affairs in which you play a role in helping those who are in need, even if the world as a whole is not a better place as a result.
This sounds teleological, because it sounds like we should first rank possible outcomes (broadly construed to include the acts themselves) according to which we should prefer over which others, and, second, we should perform the act that produces the outcome we should prefer above all other available alternatives.
(2) Is the view time-relative as well as agent-relative? Should I act so as to fulfill a current promise if this will prevent me from fulfilling two future promises? Also, in those instances where “the world as a whole will be equally good whether I play a role in helping those who are in need or not,” should I step up now to play a role in helping others even if this will prevent me from being able to step up more numerous times in the future?
(3) Suppose that we each have an equally compelling reason to ensure that utility is maximized. Is this, on your view, a case where we all have the same agent-neutral reason or a case where we all have equally compelling, agent-relative reasons to ensure that the same state of affairs obtains? So what exactly is the distinction between agent-relative and agent-neutral reasons on your view?
Posted by: Doug Portmore | February 13, 2007 at 07:53 AM
Ralph, I don't get it. It was no part of the story that I had already done my fair share and you haven't. Our situations are symmetric, so we are in competition. This is an essential feature of agent centered reasons: they can lead us to compete.
I agree that we might be duty-bound to compete in a polite and respectful way. But again, it was no part of the story that it would be rude or disrespectful for you to grab the life preserver first. And it still seems an absurd thing for you to do.
Josh:
I don't know, it's too hard for me to see what this view amounts to. At the extreme, of course, one could always just take the intuitively right thing to do, in each situation, and claim that the agent had an agent centered reason to do that -- I'm sure there will always be a 'role'.
There are lots of good cases to be made for agent centered reasons. There are the break-one-promise-to-prevent-five-breakings examples, some good 'fair share' examples, and the asymmetry that Doug Portmore and Ted Sider wrote about, between self-sacrifice and other-sacrifice. I haven't yet seen Ralph's case for the agent-centered only view.
Posted by: Jamie | February 13, 2007 at 08:24 AM
Ralph,
I thought that you had classic AC in mind. However, I wonder if this makes the objection even more problematic. Part of classic AC is to be of a more foundationalist bent and to ignore our common-sense agent-relative intuitions. So, well-being is of ultimate value, it ought to be maximised, and everyone's counts the same. Of course this view has all sorts of counterintuitive implications, from scapegoats to not saving friends. But so much the worse for the intuitions, says the classic ACer. That would also go for your intuition (which I think I have too).
If the act-consequentialist wants to incorporate our moral intuitions, then she is bound to make some radical revisions in the axiology. At that point, she will no longer remain a 'classic ACer', and she has all sorts of ways to incorporate your intuition. So, I'm not sure I see how the objection gets a grip on AC.
Posted by: Jussi Suikkanen | February 13, 2007 at 09:20 AM
Doug -- Here are my answers to your questions:
(1) I'm happy to present this view in a teleological form (although I wouldn't regard the teleological formulation as giving an explanation of the deontic formulation, just as a reformulation of it).
(2) If we present this view teleologically, we would indeed probably have to make the value to be maximized both agent- and time-relative.
(3) My idea is that as Parfit puts it, agent-relative theories give different agents different aims, aims that can conflict with each other.
Jamie -- I'm afraid that I don't get why you think it would be an "absurd thing for [me] to do" to throw the lifebelt first. I have conceded that it would be absurd if my doing so disrespectfully interferes in your activities, by deliberately frustrating a rational attempt that you are already making to achieve one of your permissible (indeed admirable) goals. But otherwise, if there's nothing disrespectful or rude about my throwing the lifebelt (and I don't owe you any apology or explanation for my doing so), what would be "absurd" about it? Saving people from drowning isn't an "absurd thing for me to do", even if someone else would do the saving if I didn't.
You're right that agent-relative reasons, by their very nature, can conflict. If the case is perfectly symmetric, then there probably isn't anything that I could say to persuade you to let me do the helping instead of you. Still, I shouldn't be indifferent between your helping and my helping: I should prefer the outcome in which I do the helping (although of course I shouldn't act on this preference in any way that is disrespectful towards you, e.g. by pushing you aside, as you suggested).
Jussi -- I think your point is answered by what I said in reply to Mark above.
Posted by: Ralph Wedgwood | February 13, 2007 at 09:47 AM
Suppose I can either play a role in bringing about a big good (but my role will be superfluous in bringing about that good--the good would occur without my help) or play a role in bringing about a smaller good (where the smaller good would only happen if I play that role). Does your view say that in some such cases I have most reason to join the already sufficient group which is creating the bigger good?
Posted by: David Sobel | February 13, 2007 at 09:48 AM
Ralph,
The only absurd part is your sprinting ahead of me to be the first to the life preserver. This strikes me as pointless, and so literally absurd. If I think of us as each 'wanting to be the hero', then it makes sense. But that doesn't seem to me to be a moral motivation at all, and that I get to be the hero does not seem to be a moral reason.
I think I might get much clearer on all this when I see the answer to David Sobel's question.
Posted by: Jamie Dreier | February 13, 2007 at 10:36 AM
Ralph,
I don't find it plausible to suppose that it would be wrong for me to refrain from stepping up now to play a role in helping others if so refraining would enable me to more often step up (in like circumstances) to play a role in helping others. This intuition makes me think that the teleological formulation is giving an explanation of the deontic formulation. But I gather you must have different intuitions. Is that right?
Regarding agent-relativity, I was asking what you take to be the distinction between agent-relative and agent-neutral reasons, not what you take to be the distinction between agent-relative and agent-neutral theories. The latter doesn't, to my mind, obviously help me to understand the former. And, since you claimed that there are no agent-neutral reasons, I want to understand what you mean by that claim.
Posted by: Doug Portmore | February 13, 2007 at 11:10 AM
Ralph,
One way of distinguishing agent-neutral theories and agent relative theories is via same goal vs. different goal. A theory is consequentialist (in the old sense that I like) if it gives all of us the same goal, and non-consequentialist if it sometimes gives agents different goals. If we extend that to talk of agent-neutral vs. agent relative reasons, it looks like we might want to say that a reason to act is agent-neutral if (to the extent that it gives agents reasons to act) it gives them all reason to act in pursuit of the very same goal.
If that's right (and there may be an obvious problem with this way of thinking about it that I'm missing), then it looks like your thought is that even when it looks like agents are acting on an agent-neutral reason, they really are not because the ends of their actions are different. But I'm having trouble seeing how benevolence (say) would be best thought of in such a way that the fact that some person would be better off if they had food generates different goals for agents. And even if it did, I have even more trouble seeing that it would not in some good sense also generate the same goals for the agents. So even if Fred's needing food generates the goal in me of my helping Fred to get food, it also seems to generate the goal that Fred get food (and not merely because the latter is a necessary constraint on my achieving the former).
Is there something I'm missing?
Posted by: Mark van Roojen | February 13, 2007 at 12:14 PM
Mark -
I don't think trying to make the distinction in terms of goals is deontology-friendly. If we're all teleologists, and think that what we ought to do is driven by certain goals, then yes, we can fret about whether we all get the same goal or some of us get different goals.
You say that the relative duty would be given by the goal that I help Fred, and point out that there should still be the goal that Fred gets helped. If I'm Ralph, I'd resist the first claim. I have a duty to help Fred, not because of a further goal that is assigned to me, of Fred getting helped - that would be the teleological explanation - but that is where theory bottoms out, or because it follows from a principle or because it follows from the balance of my reasons to act, or something like that.
The teleological picture looks like it starts with ordering states of affairs or propositions - things like that I help Fred and that Fred gets helped, and then inducing an ordering on actions, for me - things like helping Fred and making sure Fred gets helped. I took the 'theoretical motivation' behind Ralph's suggestion to be that deontologists should not think of things in some hybrid way, but rather the other way around. We simply start with duties for actions, which we might think of as properties.
On this picture, helping Fred, which is an action - the property that I have when I am helping Fred - is the object of a duty. It is the same duty for everyone - which is why I don't like calling it 'agent-relative'. It also fails McNaughton and Rawling's test for agent-relativity, because their test requires that all duties be, at bottom, to make some proposition true, and that is what this picture denies. So it doesn't satisfy any standard definition of agent-relativity. Nevertheless, when we think of things in this way, it's easy to explain all of the familiar so-called 'agent-relative' phenomena: constraints, options, and so on.
But still, along with helping Fred, making sure Fred is helped might also be a duty. And so might not interfering with someone helping Fred. I'm doubtful that once making sure Fred gets helped is acknowledged as a duty, Ralph will be able to construct any cases that can distinguish this view from the other.
But the duties it postulates are all still relative in that they take actions as their objects, which are understood as a kind of property, rather than beginning with goals or objectives which can be defined in terms of propositions or states of affairs. And the interesting thing about such 'relative' duties, is that they allow us to account for constraints and options by appeal only to neutral obligations, in the original sense of the term - obligations that are the same for everyone.
Posted by: Mark Schroeder | February 13, 2007 at 01:38 PM
Thanks, Jamie. Right, that (what I said above) isn't going to provide a defense of the agent-centered only view. It's just meant to de-fang the kind of counterexample to that view that we were discussing.
And, right, more would need to be said to flesh out the role for us to be completely satisfied.
Posted by: Josh Glasgow | February 13, 2007 at 02:04 PM
Thanks Mark, that helps. I've even argued in print that reconstructing deontology/non-consequentialism in the teleological way is not non-consequentialist friendly, so I should accept your point on that score.
Still, here is what got me asking in that way about Ralph's proposal. I think that it is relatively natural even within a non-consequentialist framework to ground some duties in the thought that doing an action of the type picked out by the duty would make things better - better from a perspective that is available to others who are not the agent. For example, that more people would be well off in such and such a situation can ground my duty to do an action which brings about that situation. Even if what that fact gives me a reason to do is an action of mine (and hence is relative to me), there seems to me to be a good sense in which the reason is or might be called agent-neutral. I was partly trying to figure out if Ralph was denying even that.
Posted by: Mark van Roojen | February 13, 2007 at 04:24 PM
David and Jamie:
Although I should probably let Ralph answer for himself, I'd suggest the following example. Suppose I live in a very cold part of the world, and I am volunteering to take part in the building of homes for the homeless. I assume that this is a way I can play a role in bringing about a pretty important good. When I get there I realize that the house would be built even if I were to go away (and let's assume that there'd be no significant delay, or any significant burdens to others). I also realize that my neighbour's son left this plastic ball outside in the cold, and if I don't put it back in for him it'll be deflated, and it'll end up costing my neighbour $1 to buy a new one. I could now go back home so as to arrive in time to save my neighbour's kid's ball, or help out building the houses (assume I live about an hour away from the construction site, so it makes no sense for me to get the ball and come back). It seems to me not terribly counterintuitive to say that I ought to stay and help build the houses. I am not sure that Ralph actually wants to say this about this case, and, of course, there are other ways to get the same result, but this might be a case in which the theory could have the implication that David suggests without doing too much violence to our intuitions (David didn't say that it would but I sensed, perhaps wrongly I admit, a suspicion that this might not be a very desirable implication of Ralph's view).
Posted by: Sergio Tenenbaum | February 13, 2007 at 07:41 PM
I like Sergio's case, and it's not hard to grant its supposition. It sounds totally reasonable to me, and like it illustrates that the reason to help build the house does not derive merely from a reason to make sure that the house is built - though you may also have that reason.
Posted by: Mark Schroeder | February 13, 2007 at 09:53 PM
I agree with Sergio that there would be some reason to stay and help build the house, but I doubt that it has anything to do with being the cause of an important good's being provided oneself. Surely there are other more plausible reasons in the neighborhood (if I can put it that way). First, it goes with the very idea of helping that you would be part of a team for this good, so you would be expressing solidarity with them, etc. You would also be helping THEM, so you would be easing their burdens ("many hands make light work"), raising their spirits, etc.
Posted by: Steve Darwall | February 14, 2007 at 07:48 AM
I like Sergio's example too. But I wonder if Ralph would want/need to say that one should join/stick with the group working for the bigger good even when working for the smaller good is no less onerous and so not plausibly seen as shirking hard work that someone needs to do.
I also quite like Darwall's reply to the example. If we could construct a case where team spirit is screened off, that would help isolate the wanted case. So perhaps let there be two buttons one will push in the privacy of one's own home (and one must keep quiet about which one one pushes). Supposing that we can arrange a case in which one knows both that the bigger good is already going to be created without one's "contribution" and that the smaller good will not happen without one's contribution, and in which the agent's action nonetheless counts as part of the cause of the bigger good, I find it odd to think one ought to join in causing the bigger good.
Posted by: David Sobel | February 14, 2007 at 08:36 AM
I've lost track of what Sergio's example is supposed to show. I understand the reasons that Steve mentioned. I understand also a reason of fairness: Sergio has a fair share of house-building to do, which won't be paid off by someone else doing it or by Sergio saving his neighbor's ball.
But how are these things related to Ralph's idea that there are no agent neutral moral duties?
Posted by: Jamie | February 14, 2007 at 09:13 AM
Great discussion. Could we get clear on what is meant by agent-relative and agent-neutral duties? The question is also to Ralph - what did you mean by agent-relative in the post?
Here's one more suggestion that I quite like. It's from Mike Ridge:
'If the principle reflecting the reason makes an ineliminable (and non-trivial) back-reference to the person to whom the reason applies, then the reason is a personal (agent-relative) one; otherwise it is impersonal (agent-neutral). For example, the principle that an agent has reason to maximise *her own* happiness is agent-relative, as is the principle that an agent must promote the welfare of *her* friends. On the other hand, the principle that one has reason to maximise happiness, and the principle that one should maximise friendship, are both agent-neutral.'
This fits Ralph's version of the agent-relative duty quite well - the duty that I put myself in the helping position. Maybe there are reasons that require this on some occasions. Sergio's example seems to be one such case, even with Steve's characterisation of the reasons.
But I wonder whether this is always the case. Can't there be cases where it is a matter of indifference whether I help or others do? I think I'll be a pluralist about this and accept that, depending on the case, there are both types of duties of beneficence - agent-relative and agent-neutral. AU probably should agree with this. On occasion, it must be the case that *my* helping others, even when others could do the same, maximises happiness.
Posted by: Jussi Suikkanen | February 14, 2007 at 09:41 AM
Jamie,
Isn't this it: Sergio's example shows that, contrary to what AC says, I should not be indifferent between helping and not helping, even though it would be very slightly better overall if I did not. The idea (I take it) is that I would have some special relationship to helping (set aside why) that is not explained by enhancing the total outcome of what I do.
Posted by: Robert Johnson | February 14, 2007 at 09:58 AM
Suppose Sergio is right - screening off things like solidarity. The case suggests that the reason to help is weightier than the reason to ensure that help is given. But if that is right, then that would seem to suggest that the former can't be wholly derivative from the latter.
I'm still not clear on the exact content of Ralph's positive claim; if we interpret it the way I suggested above, I don't think it will lead to any normative differences in cases. But one thing seems clear: on the sorts of standard hybrid views that Ralph meant to be criticizing, I take it, the former reason is wholly derivative from the latter - you basically have reasons to ensure that good things happen, controlled by some constraints and some prerogatives. So the example looks relevant, to me, to testing that view.
I agree that it's hard to screen off where else the extra reason to stay and build might come from, though. I want to compare an ordinary decision whether to go to the polls on election day, rather than David's button-pushing example, but that might bring in other complications.
Posted by: Mark Schroeder | February 14, 2007 at 10:02 AM
Ah! Robert beat me to the punch line. Jussi - there are multiple ways of precisely defining agent-relative reasons that have to do with 'essentially pronominal back-reference', and none of them is theory-neutral.
Nagel's way is this: first assume that all reasons are reasons to bring about some state of affairs. Then look for the weakest modally sufficient condition for it to be the case that X has a reason to do A - that is, the weakest condition such that necessarily, if X satisfies that condition, then X has a reason to do A. If that condition is a pure Cambridge property - being such that P, for some proposition P - then the reason is agent-neutral; otherwise it is agent-relative.
McNaughton and Rawling's way is this: first assume that every moral theory is committed to being able to formulate its most basic requirements as principles of the following sort: for all agents x, there is a reason for x to bring it about that Fx. If Fx is open in x, then the reason is agent-relative; otherwise it is agent-neutral.
Everyone talks about how agent-relative reasons have to do with pronominal back-reference, but few are clear on the fact that these are quite different distinctions. In Nagel's distinction, the pronominal back-reference comes in the statement of the sufficient condition for the agent to have a reason, whereas in McNaughton and Rawling's case, the back-reference comes in the statement of what the agent has a reason to bring about.
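Schematically, and just as a rough gloss of mine (suppressing the details above), the difference in where the back-reference may occur looks like this:
Nagel: take the weakest condition $C$ such that $\Box\,\forall x\,[\,Cx \rightarrow x \text{ has a reason to do } A\,]$ (where, by the background assumption, doing $A$ is bringing about some state of affairs). The reason is agent-neutral if $Cx$ is merely of the form "$x$ is such that $P$", for some proposition $P$; otherwise - if $Cx$ makes essential use of $x$ - it is agent-relative.
McNaughton and Rawling: basic principles have the form $\forall x\,[\,x \text{ has a reason to bring it about that } Fx\,]$. The reason is agent-relative if $Fx$ is open in $x$ (the agent-variable occurs in what is to be brought about); it is agent-neutral if $Fx$ is really just a closed sentence $P$.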
I don't think either of these definitions is that helpful, because neither is theory-neutral. Nagel's requires us to assume that the only reasons there are, are reasons to bring about some state of affairs. If we don't assume that, then we get a perfectly good distinction, but it has nothing in particular to do with constraints or options. McNaughton and Rawling's requires us to assume both that the basic reasons are to bring about some state of affairs and that all reasons are derivative from reasons that are reasons for everyone. I don't think either of those assumptions is true.
Basically, my own view is that when people talk about agent-relative reasons, the best way to understand them is as talking about the kinds of thing that create trouble for classical (agent-neutral) consequentialism. We all know what those are, so that gives us a sense that we've succeeded at giving a neutral characterization of what agent-relative reasons are. But in fact, I think we haven't; we've merely succeeded at showing how one would characterize them if one went in for certain background assumptions.
Posted by: Mark Schroeder | February 14, 2007 at 10:23 AM
Mark,
that's good and interesting. I agree that Nagel's and Rawling & McNaughton's definitions seem problematic. Maybe; still, I'm not sure why Ridge's account is in need of a more precise definition. I'd think that *ineliminable* back-reference is a rather precise test for agent-relativity. It seems like a good test to ask whether a given principle can be formulated in a way that does not require using such pronouns - pronouns that refer back to the person whose duty is in question - in defining the actions, the goals or the source of reasons. I'm not sure what theory this line assumes and why it doesn't work as a test. But maybe I'm missing something.
Posted by: Jussi Suikkanen | February 14, 2007 at 10:51 AM
What a torrent of fantastic comments -- thank you so much, guys!
Here are some very brief and superficial responses to a few of those comments.
1. I guess what's bothering Jamie in his case must be this. He assumes that I don't like sprinting, and that he won't have to sprint quite so fast to reach the lifebelt in time; so if I sprint to get to the lifebelt first, I am wasting resources that would be more efficiently used if I let him throw the lifebelt instead. But then of course this isn't a case where he and I are symmetrically related to the situation; and so it's not exactly the same as my original case.
Of course, if I have a reason to prefer that I am an active helper in the symmetrical case, it will also be plausible that I have a reason to prefer that I am an active helper in a slightly asymmetric case as well (such as Sergio's case). I'll comment on what to make of such asymmetric cases when responding to David's point.
2. I think that I may have misled Doug with my parenthetical remark that I wouldn't regard the teleological formulation of the view that I was suggesting as giving an explanation of the deontic formulation. In my slightly idiosyncratic view, the word 'ought' is multiply context-sensitive and expresses many different concepts in different contexts, and the construction 'S has a reason to ___' is itself a weak sort of 'ought'. So by 'deontic' I wasn't referring to 'moral wrongness'. I agree that since the duty of beneficence is an imperfect duty, it's not "morally wrong" to refrain from helping every person whom one could possibly help, so long as one does one's fair share of helping overall.
3. Mark van Roojen raises an excellent problem for my definition of "agent-neutral reasons". I will have to think about what exactly I ought to mean by my claim that there are no agent-neutral reasons for action. I guess that what I had in mind was something like this. Every reason for action involves a reason for pursuing some ultimate aim; and whenever one has a reason to do anything, the fact that grounds that reason grounds a reason to pursue an agent-relative ultimate aim. (An agent-relative aim is, roughly, an aim that the agent would naturally specify using the first-person pronoun.) So even if the fact that grounds the reason grounds a reason for pursuing an agent-neutral ultimate aim, the reason in question will be agent-relative if this fact also grounds a reason for pursuing an agent-relative ultimate aim.
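To put that schematically (and only very roughly): let $G(f, x, A)$ mean that fact $f$ grounds a reason for agent $x$ to pursue ultimate aim $A$, and call an aim agent-relative when $x$ would naturally specify it using the first-person pronoun. Then the suggestion is that the reason grounded by $f$ is agent-relative whenever there is some agent-relative aim $A_{rel}$ with $G(f, x, A_{rel})$ - even if there is also some agent-neutral aim $A_{neu}$ with $G(f, x, A_{neu})$.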
4. Mark Schroeder is completely right that the exact way in which one distinguishes between "agent-neutral" and "agent-relative" will itself depend on one's other theoretical commitments; and the way in which I draw the distinction depends on a number of views that he rejects -- such as my view that 'ought' is fundamentally a propositional operator, and my views about how the concepts that can be expressed by 'reasons' are related to the concepts that are expressed by 'ought'. Since Mark and I are on opposite sides of those debates, I don't expect him to like my account of the "agent-neutral" / "agent-relative" distinction!
5. David raises a crucial question. At the risk of being cryptic, I'll tell you about the view that I'm currently trying to explore. According to this view, there are actually two sorts of reasons to help. Somewhat tendentiously, I'll label these reasons the "moral" reason and the "pre-moral" reason.
The pre-moral reason is simply a reason to play an active role in helping people, because helping people is an intrinsically worthwhile thing to do. The moral reason is a reason to contribute towards creating the best collective activity of helping that one can, and then to shoulder one's fair share of the burdens of that collective activity. Whether a collective activity of helping counts as the "best" is not an agent-neutral matter, but is in part determined by the number of participants in the collective activity and those participants' pre-moral reasons. In particular, efficiency at advancing the aims that correspond to these participants' pre-moral reasons is one crucial determinant of whether or not this collective activity counts as "best".
In David's case, I'd say that given that the large good will be created anyway whether one pushes the button or not, we have a better pattern of collective activity if one pushes the other button to create the smaller good. So one has a moral reason to create the smaller good rather than the larger one. (Thus, in the end, I agree with Steve's interpretation of Sergio's case.)
Posted by: Ralph Wedgwood | February 14, 2007 at 12:48 PM
I find this:
"even if the fact that grounds the reason grounds a reason for pursuing an agent-neutral ultimate aim, the reason in question will be agent-relative if this fact also grounds a reason for pursuing an agent-relative ultimate aim."
quite strange.
Take Mona Lisa's smile. That grounds a reason for pursuing the agent-neutral ultimate aim that the painting is preserved for future generations. It also grounds a reason for me to pursue the agent-relative aim of *me* seeing the painting. Now, on your criterion Mona Lisa's smile would therefore be an agent-relative reason. Does that mean an agent-relative reason tout court? That sounds odd. I'm not sure what a reason tout court would be.
I mean it does look like a good agent-neutral reason to preserve the painting and maybe an agent-relative reason for me to see it (I'm not even sure about this). But, why say it is one or the other on the whole? I also don't think that in this case the reason there is for ensuring the preservation of the painting implies that I play a certain active role in preserving the painting.
Posted by: Jussi Suikkanen | February 14, 2007 at 01:56 PM
Jussi -- You're right: My response to Mark van Roojen seems too half-baked. Oh well....
Perhaps what I need to say is that we have an agent-relative reason if the relevant fact grounds a reason to pursue an agent-neutral ultimate aim only because it grounds a reason to pursue an agent-relative ultimate aim as well? (There are lots of promissory notes here about what I mean by "grounding a reason" and by an "ultimate aim".)
Posted by: Ralph Wedgwood | February 14, 2007 at 02:06 PM
I am sorry if this is, strictly speaking, off topic after Ralph's post. Robert and Mark already said most of what I want to say, but here are a couple of further thoughts. My example was just meant to challenge the idea that a theory is counterintuitive insofar as it implies that in some cases people ought to help in bringing about a greater good that is going to be brought about anyway, rather than bringing about a smaller good that will not otherwise be brought about. As I pointed out in the post, there might be other ways to account for the case (I was thinking of something along the lines of Steve's account of the example. I also agree with Steve that "being the cause of an important good's being provided oneself" is an implausible description of the reason I have to help in that case or, for that matter, of any reason to help in any case. But I think one can accept the implication without accepting this way of describing the reason.), and so the example won't show that the implication is unavoidable.
But I agree with David that we should ask whether the implication will strike us as odd in every example in which we screen off team spirit. Unfortunately, I agree with Mark that it is not clear that one can screen off every other possible reason. Now, first, if in my original example I had just heard on the radio that a certain group was going to build houses for the homeless, and I decided to join in, I am not sure there are reasons of fairness or team spirit to help build the house, but I don't think that changes the original intuition in the case (though again, I am not denying that there might be still other ways to account for the example).
I am not sure what to say about David's example. On some ways of filling it in, I am simply not doing anything when I press the button to "help" bring about the greater good. When I try to fill in the example in a credible form, I don't have the intuition that the implication is odd, but I am not sure I have in my hands a really "pure" example. Say, for instance, a rich person will make an enormous donation to Oxfam if at least 1,000 people press the "help" button on his website. Suppose I go to the website and I immediately realize that there is no doubt in my mind that more than 1,000 people will press the button. At the same time I see, at the corner of my computer screen, my program that shows random web cams all over the world. Let's assume that if I press a button in the program, it sends an instant message to the owner of the web cam alerting him to check it. And suppose that I see a ball in the same situation as in my original example, with the homeowner on the third floor surfing the net, oblivious to the fate of the ball. Finally, I realize that the battery in my laptop is running out and I can only click the mouse once before it shuts itself off. It's not clear to me that in this case I ought to press the web cam button rather than the "help" button.
Posted by: Sergio Tenenbaum | February 14, 2007 at 03:10 PM
That would be one way of doing it. But, the consequences for your original thesis seem difficult. You wanted to argue that the duty of beneficence is agent-relative. I take it that in that case the ultimate agent-neutral aim is that there is less suffering in the world and the suffering of others is a reason for adopting this aim.
If your new definition of agent-relativity holds, then the existence of this agent-neutral duty (and reason) would be conditional on the fact that the suffering of others grounds a reason for me to adopt an agent-relative aim that I put myself into a certain active position within the project of helping others. Somehow that seems like putting the cart before the horse. It seems like there is first a reason for suffering to be alleviated, and maybe then also a derivative duty for me to play a special role in this. But that has to come second.
Posted by: Jussi Suikkanen | February 14, 2007 at 03:27 PM
Just to report my intuitions about Sergio's new (and quite delightful) case: I think that he should click on the web cam button rather than the "help" button.
On Jussi's point: I would agree that there is an intrinsic disvalue in suffering, and this sort of disvalue gives everyone a reason to hope that the suffering is alleviated; and on my own view, such intrinsic values and disvalues are what ground the reasons for action (since reasons for action are reasons for the agent to put herself in the right sort of relationship with these intrinsic values and disvalues). But a hope is not an aim: an aim is the content of an intention (or at least a tentative intention). On the view that I'm suggesting, I have a reason to have the aim that suffering is alleviated only because I have a reason for the aim that I actively contribute towards the alleviation of suffering.
Posted by: Ralph Wedgwood | February 14, 2007 at 03:42 PM
I also think Sergio should click the webcam button.
I'll go further: I don't see what reason he has to push the help button.
Remove the certainty that at least a thousand people will push their help buttons and you change the situation, though. In that case it seems to me that he should push the help button. (Well, of course, he might be certain that fewer than 500 people will push their help buttons, in which case I again think he has no reason to push his.)
Posted by: Jamie Dreier | February 14, 2007 at 03:59 PM
Maybe David's example can be rephrased like this. Suppose we have a choice of worlds. On one world (World-the-first), we get a big good. On another world (World-the-second) we get a big good, plus a small good. What world would you pick?
Picking these worlds is just what we really do do in the case that David has raised. We can make the world be either like world-the-first or like world-the-second. It is up to us.
This is what the example reduces to, it seems to me. And it also seems to me to be obvious that we should pick world-the-second. World-the-second, of course, just is the world where we go get the ball out of the ice, and do nothing about the housing project and it gets built anyways.
Posted by: Peter Jaworski | February 14, 2007 at 04:03 PM
This:
'I have a reason to have the aim that suffering is alleviated only because I have a reason for the aim that I actively contribute towards the alleviation of suffering.'
is very clarifying. It's the 'only because' here that I cannot get myself to accept. For me, the suffering of others in itself seems to be, non-conditionally, a sufficient reason to adopt the agent-neutral aim. It could be that it also gives me a reason to actively contribute, but that seems neither here nor there with regard to adopting the general aim. There is something fetishistic about the 'only because' here.
Posted by: Jussi Suikkanen | February 14, 2007 at 04:24 PM
Great discussion, Ralph. Excellent start.
Jussi - I think that Ridge's definition, at least as you've stated it, is imprecise, because I think it accurately describes both the Nagel and the McNaughton/Rawling distinctions. Both are about ineliminable pronominal back-reference to the agent, but they differ with respect to where this ineliminable back-reference has to occur - for Nagel it is in the sufficient condition (which he claims is the reason), whereas for McNaughton and Rawling it is in the thing that the reason is a reason to make the case (the proposition to be made true). Since they're not the same view, and your description of Ridge's distinction describes both of them, I take it that Ridge's distinction as you describe it is imprecise.
As I said before and Ralph repeated, in order to distinguish between agent-relative and agent-neutral reasons by appeal to pronominal back-reference, we need to make certain background assumptions, in order to have canonical statements of the reasons that will have such back-reference in the right places.
Suppose that I don't have such a view. Suppose, for example, that I think that reasons are considerations (which I take to be facts) which count in favor of actions (which I take to be properties that agents can have). Among the reasons, I might hold on this view, is the fact that murder is wrong. That, I think, counts in favor of the following action: not murdering. Not murdering, on this view, is a property. It is a property that you have whenever you are not murdering.
This view is only schematic, but it appears to allow for agent-relative reasons that are described in ways that make no pronominal back-reference to the agent. The reason itself makes no such reference, because it is just the fact that murder is wrong, which makes reference to no agent. And the thing the reason is in favor of makes no such reference, because it is just a property: the property of not murdering. Nothing in the description of this property makes reference to the agent (though of course, it is the kind of property that is had by an agent - but on this view of reasons, all reasons are in favor of such things). So it looks like a view on which agent-relative reasons can be formulated without pronominal back-reference to the agent.
The moral is, whether a formulation or description of some reason has certain features depends on what we take to count as a formulation or description of the reason. And that depends on more than bookkeeping; it depends in part on substantive questions about the relata of the reason relation.
Posted by: Mark Schroeder | February 14, 2007 at 04:27 PM
Mark,
thanks, that's helpful. I'm not sure why more general distinctions would be more imprecise. I thought it would be a precise test to ask whether the back-reference is made anywhere, even if you don't take a stand on where it is made. I'm not sure how that introduces vagueness. But never mind - I catch the drift.
I'm not sure why we should think that the fact that murder is wrong is an agent-relative reason not to murder.
Posted by: Jussi Suikkanen | February 14, 2007 at 06:31 PM
Mark, I don't think I get your idea. On your suggestion, surely the agent-relative/ agent-neutral distinction comes back in as the distinction between a reason for acting so that the property not-murdering is had by *as many agents as possible*, or by *agents in general*, and a reason for acting so that it is had by *this* agent? So we do still need the pronoun (or some sort of indexical, at least) to specify the A-R reason.
Ralph (and everybody), I struggle with the whole idea of an A-N/ A-R distinction anyway. I think reasons for action are ways of justifying action, and that justifications have to be general in form to be intelligible at all. "Because she's my wife" is, on its own, no more intelligible than "Because I'm me". What makes it intelligible is the continuation that Williams rejects: "and in situations like this, it is permissible for any agent to rescue his wife". (Slot in the criterion of rightness/ decision procedure distinction as per usual.) Similarly, what (normally) rules out "Because I'm me" is the continuation we can most naturally add: "and in situations like this, it is permissible for me to prioritise myself".
I think this requirement of generality holds for moral reasons because I think it holds for reasons in general. Reasons for belief, whether that means reasons about evidence or reasons about inference, have to be general too: "p because it looks like p to me" doesn't state an intelligible reason for asserting p unless the background allows us reasonably to spell the reason out with "...and in situations like this there is epistemic permission for any observer to accept his/ her appearances".
So moral justifications can't, any more than any other sorts of justifications, make basic and ineliminable reference to particular agents. So there can't be a *deep* A-R/ A-N distinction for moral justifying reasons. The most there can be is justifying reasons with this sort of reflexive pronoun in them: "For any agent, that someone else is *his* wife is a reason for *him* to treat *her* as special". But those are not ineliminable pronouns in (what I take to be) the required sense; they're bound variables.
Posted by: Tim Chappell | February 15, 2007 at 11:01 AM
Jussi - I didn't say that the Ridge definition as reported by you was vague; I said it was imprecise, and supported that claim by pointing out that it accurately describes each of two inconsistent ways of making the distinction.
In any case, it's neither necessary nor sufficient that 'pronominal back-reference' occur just anywhere. The view I offered shows that it is not necessary. On that view, there is a reason for everyone to not murder - which is 'agent-relative' in the sense that it results in a constraint, assuming that it is a weightier reason than the reason to prevent murders, and constraints are paradigmatic of 'agent-relative' phenomena. But there is no 'pronominal back-reference' anywhere.
It's also not sufficient. Suppose that basic reasons take propositions, but need not be reasons for everyone. And suppose, further, that necessarily, it is sufficient for any agent, X, to have a basic reason to make it the case that X does A, for arbitrary A, that X can bring it about that P by doing A, and that no weaker condition is such that necessarily, any X who satisfies that condition has a reason to make it the case that X does A. And suppose, finally, that Jon can bring it about that P by not murdering. It follows that Jon has a basic reason not to murder. But this reason of Jon's not to murder is not agent-relative, in the appropriate sense, because he only has it because given his circumstances, that is the way for him to bring it about that P. Yet there is pronominal back-reference all over the place.
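Schematically, and again just as a gloss of the supposition above, the sufficiency claim is
$\Box\,\forall X\,\forall A\,[\,X \text{ can bring it about that } P \text{ by doing } A \rightarrow X \text{ has a basic reason to make it the case that } X \text{ does } A\,]$,
with no weaker condition doing the same work. Since Jon can bring it about that P by not murdering, Jon has a basic reason to make it the case that Jon does not murder. The agent-variable occurs both in the sufficient condition and in what is to be made the case, even though the reason is, at bottom, simply in the service of P.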
In fact, the problem is precisely that there is too much pronominal back-reference. Allowing for it in the specification of the proposition to be made true undermines the ability of its presence in the sufficient condition to distinguish between cases that give rise to difficulties for consequentialism and those that do not. That is why, in order to make the distinction, Nagel specifically assumed that the basic reasons can only be in favor of propositions, and understands them only in ways that don’t allow that they might be open in the agent-variable. (In fact, if I remember correctly, it is basically this objection that McNaughton and Rawling raise to Nagel's definition in the process of motivating their own – ignoring the fact that he is making this assumption.) But on the other side, allowing for back-reference in the sufficient conditions for basic reasons undermines McNaughton and Rawling's definition, too. That's why they have to assume that no theory will allow basic reasons that are reasons only for some people. They assume that the basic reasons will have to be reasons for everyone, and necessarily so.
I’d encourage you to spend some time studying how Nagel’s definition works and thinking about why he does it that way. Few works in moral philosophy in recent decades repay study more than The Possibility of Altruism. McNaughton and Rawling’s original paper is also enlightening. And I’d encourage looking at two of my papers, as well: 'Reasons and Agent-Neutrality' and 'Teleology, Agent-Relative Value, and ‘Good’.'
Tim - I didn't mean to say that there was no agent-relative/agent-neutral distinction on the way of thinking about reasons that I articulated; just that it wasn't tracked by any of the standard definitions. I do think that each of your ways of re-describing the view that I articulated is a way that I would reject. The view isn't teleological, so it denies that the reason is a reason to act so that anything. It is just a reason in favor of an action.
Also, you're right: in both of the precise definitions of the agent-relative/agent-neutral distinction, there is no 'ineliminable reference to particular agents'. There is essentially only a bound variable in a general statement about reasons.
Finally, there are two importantly different ways of understanding your claim about generality. One way that justifications might always have to be general is that nothing can be a reason for someone to act unless that person is in some circumstances, such that there is a universal generalization to the effect that anyone in those circumstances has that reason to do that thing. This is, from my point of view, a relatively innocuous claim about generality. But another way that justifications might always have to be general, is that you might think that nothing can be a reason for someone to act unless it derives from a reason for anyone to act, together with circumstances that determine how they, in particular, are to do it. The latter idea is much stronger, and I think there are lots of reasons to reject it. McNaughton and Rawling need it in order to state their version of the distinction.
Posted by: Mark Schroeder | February 15, 2007 at 11:27 AM
Thanks, Mark, that's interesting and helpful. Brief responses:
1) About your reasons in favour of action which aren't "so that" anything (non-teleological reasons): do you think all reasons are like this? Surely some reasons are "so that" something (teleological)?
2) The teleological/non-teleological distinction in 1) surely cross-cuts the A-R/A-N distinction: non-teleological reasons can be either A-R or A-N, and so can teleological reasons (if there are any). Yes?
3) You say: "in both of the precise definitions of the agent-relative/agent-neutral distinction, there is no 'ineliminable reference to particular agents'. There is essentially only a bound variable in a general statement about reasons." I reply: Maybe you don't dispute this, but I thought some people did. I thought the idea in a lot of A-R theorists' minds (probably including Ralph's, though I haven't had time to read the whole of this thread to be absolutely sure of that-- I'm just getting this from his opening remarks) was that there *were* reasons which made IRTPA. Or, more radically-- and again, this seems to me to be Ralph's view-- there's the idea that *all* reasons make IRTPA. That's the idea I can't make sense of.
4) You say: "another way that justifications might always have to be general, is that you might think that nothing can be a reason for someone to act unless it derives from a reason for anyone to act, together with circumstances that determine how they, in particular, are to do it. The latter idea is much stronger, and I think there are lots of reasons to reject it."
I agree, provided we're talking about the same phenomenon. As I read your remarks, firefighters are a counterexample to the thesis that we need this sort of generality. That there is a fire gives a firefighter reasons to act that don't "derive from a reason for anyone to act". The firefighter's reasons to deploy his special expertise now derive from reasons that *any firefighter* has, not reasons that *any person* has.
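Put in the sort of loose notation Mark used above (nobody's official formalism), the generalization backing the firefighter's reason looks like:

\[
\forall y\ \big[\, \text{Firefighter}(y) \,\wedge\, \text{there is a fire nearby} \;\rightarrow\; R\big(y,\ \text{deploy the relevant expertise}\big) \,\big]
\]

- a generalization restricted to firefighters, not an unrestricted one over persons as such.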
Of course, the last-quoted phrase is ambiguous, because it depends what we mean by "derive": we could certainly try to derive the idea that society should have firefighters from "reasons that anyone has", but that would be a different sort of job.
If Rawling and McNaughton need to deny this sort of point about firefighters to state their distinction, then they certainly are in trouble... but I'm not sure they do.
Posted by: Tim Chappell | February 15, 2007 at 12:00 PM
Mark,
thanks. About the necessity side - when you first said that the fact that murder is wrong is a reason not to murder, you said nothing about constraints. That is a substantial addition. The consequentialist is bound to say that the reason not to murder and the reason to prevent murders are the same reason - that murder is wrong, i.e., fails to maximise utility. If you accept the constraint, then I take it that the back-reference creeps in - you ought to make sure that *you* don't murder, even if by doing so you allow others to commit more murders.
About the sufficiency side, you wrote:
"And suppose, further, that necessarily, it is sufficient for any agent, X, to have a basic reason to make it the case that X does A, for arbitrary A, that X can bring it about that P by doing A, and that no weaker condition is such that necessarily, any X who satisfies that condition has a reason to make it the case that X does A. And suppose, finally, that Jon can bring it about that P by not murdering. It follows that Jon has a basic reason not to murder. But this reason of Jon's not to murder is not agent-relative, in the appropriate sense, because he only has it because given his circumstances, that is the way for him to bring it about that P."
I find this slightly too concise. I'm not sure where the non-trivial, ineliminable back-reference happens. Say P is 'that general happiness is maximised' (it could be anything else that doesn't mention Jon) and Jon can bring this about by not murdering. It seems like you don't have to refer back to Jon when you say that he has a reason not to murder because, as a result, general happiness is maximised. It's true that he has this reason because of the circumstances he is in, but in describing the circumstances we don't need to refer to him. As the back-references can be eliminated in the schema, don't we get the correct result that the reason is not agent-relative?
I am a big fan of The Possibility of Altruism too. It has been a while since I read Piers's and David's paper, but I will get back to it.
Posted by: Jussi Suikkanen | February 15, 2007 at 12:26 PM
Tim -
1) Yes, on the view I was articulating but not defending, the reason relation holds between considerations (facts) and actions (properties) - not things that can be the case.
2) Yes, the distinctions cross-cut. I was just explaining that there is no standard way of even making the AR/AN distinction that is theory-neutral.
3) My point all along was that I think we can make sense of Ralph's idea without going in for any of these ways of making the distinction. Since there is no single way of making it that everyone could agree on, we have to: think of agent-relative phenomena as just those which are inconsistent with classical consequentialism in virtue of the fact that it appeals to a single ordering on states of affairs - the better-than ordering - in order to explain what people ought to do. Then Ralph's suggestion is that we understand positive duties to be that way, too - to have a structure that is inconsistent with classical consequentialism in virtue of the fact that it appeals to a single ordering, etc. We don't need anything else to make sense of it, and you'll notice, if you read through the thread, that except for my explanations to Jussi, pretty much everyone else has been taking more-or-less this as their criterion for what would be agent-relative or not.
4) Think of the 'another way' like this. Maybe firemen have a reason to hang out at the firehouse because that is their job, and everyone has a reason to do their job. That same reason is where my reason to show up for my department meeting this morning comes from - it's my job. If I understand them correctly, McNaughton and Rawling need to think that things work like this, because it is in the formulation of these most basic reasons that we have to look for agent-variables, in order to see whether they are agent-relative or agent-neutral. If you don't make this assumption, then you get the problem I explained to Jussi in my last comment. If you're curious about where this view of generality might come from or what motivates it, you should read my paper, 'Cudworth and Normative Explanations'. It's an old idea that Cudworth, Clarke, and Price took to be very important, that mattered to Prichard, and that drives lots of interesting, important arguments, including Korsgaard's argument against 'voluntarist' theories in Sources. So if you think it's false (and I agree), that's a problem for lots of interesting views.
Posted by: Mark Schroeder | February 15, 2007 at 12:31 PM
Hello again Mark:
Suppose I deny (as indeed I do) that there is any such thing as "a single ordering on states of affairs - the better than ordering - which can be appealed to in order to explain what people ought to do." You're saying that that denial alone gets me straight to an agent-relative conception of value and our reasons?
I don't see at all why that should be so. Suppose I deny the consequentialist ordering because I accept a different ordering: I say that there are actions which are forbidden, actions which are compulsory, and actions which are optional. (As it happens, this is what I say, in various publications...) None of this commits me one way or the other about the agent-relative vs. the agent-neutral. How could it?
One reason why I'm a wet blanket about A-R vs A-N is that I think it's a red herring (mixed metaphors-- donchaluvem?). I don't think the A-R/A-N distinction is the consequentialist/non-consequentialist distinction. The latter distinction is about whether we should just promote values, or also respect them (where respecting is not, as Pettit thinks it is, an agent-relative notion).
If you want a reference for that, see my "A Way Out of Pettit's Dilemma", PQ 2001...
Posted by: Tim Chappell | February 15, 2007 at 12:51 PM
Jussi - On the necessity side, you're misinterpreting the view I suggested. The view is not that the fact that murder is wrong is a reason in favor of its being the case that I not murder. The view is that the fact that murder is wrong is a reason in favor of not murdering. You get a constraint if this reason is significantly weightier than the reason in favor of preventing a murder. Why? Well, what should I do if I have a choice between murdering and allowing two murders to happen? In favor of murdering there is the reason to prevent murders. But against murdering there is the reason not to murder. And that reason is weightier, so all things considered it is wrong to murder. So that's a constraint.
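A toy calculation may make the structure vivid (the weights are made up purely for illustration): say the reason not to murder has weight 10 and the reason to prevent any given murder has weight 3. Then in the choice described:

\[
\text{in favor of murdering: } 2 \times 3 = 6 \qquad \text{against murdering: } 10
\]

Since 10 > 6, the balance of reasons forbids murdering even to prevent two murders - which is just what a constraint looks like.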
On the sufficiency side: as I said, you can make up different stories about what a canonical statement of a reason looks like. All I did here was to incorporate the relevant assumptions from both Nagel and McNaughton and Rawling that allow agent-variables to show up in the places they need. So it was part of my stipulation that the view in question thinks about reasons in part precisely as Nagel does: the condition cited has to be the complete, modally sufficient condition for it to be the case that someone has a reason to do that thing. So this condition is not just 'that general happiness is maximized', because there are worlds in which general happiness is maximized but Jon does not have a reason to do this thing (choose a different action if that makes it easier to imagine). So to get a modally sufficient condition, you need the condition 'by doing A, Jon can ensure that general happiness is maximized'.
Anyway - and this is just an aside - there aren't global, agent-neutral facts of the form, 'general happiness is maximized'. There could always be more happiness, if you just imagine that the world is a little bit bigger. 'that happiness is maximized' is surely just short for 'that of the options available to Jon, he takes the one with the highest prospect of happiness', or something like that. Which, again, has an agent-variable in it.
And finally - still on sufficiency - eliminating the agent-variable from the sufficient condition is not enough, because, the way I formulated the reason, there is also an agent-variable in the proposition he has a reason to make true (that Jon does A). My point was that allowing agent-variables in both of the places that Nagel and McNaughton and Rawling allow for undermines each of their definitions. So there is no single definition that subsumes them both.
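To restate that in the schematic notation from earlier (again, my own notation rather than anyone's official formalism): even if the sufficient condition is rewritten so that it nowhere mentions Jon, we still have

\[
R\big(\text{Jon},\ [\,\text{Jon does not murder}\,]\big)
\]

with 'Jon' inside the proposition favored; and conversely, the condition 'by doing A, Jon can ensure that P' mentions Jon even when P itself does not - an occurrence in each of the two places the two definitions look at.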
Posted by: Mark Schroeder | February 15, 2007 at 12:53 PM