In an earlier post, I claimed that Regan’s Utilitarianism and Co-operation is important because it shows that in order for a moral theory to be plausible it must require of agents something beyond just the performance of certain actions—in other words, it must not be exclusively act-orientated. But, as Matt suggested in his comment, the idea that moral theories should require of agents something beyond just the performance of certain actions isn’t exactly groundbreaking. After all, lots of moral theorists have claimed (and well before 1980) that agents are required not only to perform certain actions but also to possess certain virtues, have certain motives, internalize certain rules, apply certain decision procedures, etc.
Now let’s consider The Buttons example from my earlier post. It seems a mistake to think that Uncoop has both a fundamental obligation to push and a fundamental obligation to desire that he and Coop cooperate. For Uncoop gets no brownie points for pushing without also desiring that he and Coop cooperate. What we need to say, then, is not that Uncoop has fundamental obligations both to push and to desire that he and Coop cooperate, but that Uncoop has a fundamental obligation to see to it that he and Coop both push and, consequently, derivative obligations both to push and to desire that they cooperate.

Actually, on Regan’s view (viz., co-operative utilitarianism), what Uncoop (and every other agent) has a fundamental obligation to do is apply a certain decision procedure (where applying this decision procedure involves more than just performing certain actions). It’s a complicated procedure, but the general idea is that each agent should (1) hold “himself ready to do his part in the best pattern of behavior for the group of cooperators,” (2) “identify the other agents who are willing and able to co-operate in the production of the best possible consequences,” and (3) “do his part in the best plan of behaviour for the group consisting of himself and the others so identified, in view of the behaviour of non-members of that group” (Regan 1980, pp. x and 135). So, for Regan, the obligation that Uncoop has to push is a derivative obligation. He has this obligation only because he has an obligation to correctly apply a decision procedure that, when correctly applied, directs him to push.
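Roughly, and setting aside many of Regan’s subtleties, step (3) can be given the following gloss (the regimentation here is mine, not Regan’s own notation):

\[
p^{*} \;=\; \operatorname*{arg\,max}_{p \,\in\, P_{C}} V\!\left(p,\; b_{-C}\right)
\]

where $C$ is the group consisting of the agent and the other identified willing cooperators, $P_{C}$ is the set of joint patterns of behavior available to $C$, $b_{-C}$ is the behavior of the non-members of $C$, and $V$ measures the value of the resulting outcome; each member of $C$ is then to do his part in $p^{*}$.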
So what I think Regan’s arguments show is that no moral theory that treats obligations to perform acts as fundamental, as opposed to derivative, will be able to avoid counterintuitive implications in examples like The Buttons. To my mind, this is truly groundbreaking, because it seems to me that most contemporary moral theories treat obligations to perform actions as fundamental obligations and not as derivative of fundamental obligations to, say, apply a certain decision procedure or (on my own view) to secure a certain possible world (the one where Uncoop and Coop both push).
I'm still skeptical that the claim that all obligations to perform actions must ultimately derive from some obligation to X, where X is something other than the performance of an action, is groundbreaking. It may be controversial and heterodox for its time, but the Stoics seem to me to have endorsed something like that view, as does W.D. Ross.
The DVD example, in fact, seems a lot like Ross's example of returning a book to a friend.
[He argues that his obligation is to bring about the state of affairs in which the book is returned, rather than to take the actions that lead to that state of affairs.]
Posted by: Marcus Hedahl | August 16, 2013 at 11:49 AM
Hi Doug, I'm not so sure about the claim that (even on Regan's view) an agent can't get some "brownie points" for acting fortunately but without going through the correct decision procedure.
Suppose that Uncoop will push the button, despite not going through the cooperative decision procedure. Then Coop (if a successful/omniscient CU agent) will also push, and thereby secure the best possible outcome. Coop is the better agent, as he fulfills his decision-procedure obligations and so helps to secure the best outcome. Uncoop has acted in a merely fortunate way, which happens to allow Coop to secure the full 10 points instead of the 6 that would otherwise be available (had neither pushed), though Uncoop's choice was not sensitive to these facts. Still, there's something good about Uncoop's act (in the context of Coop's cooperation). It has some positive moral status that his not-pushing would lack.
Perhaps you'd want to say that the "brownie points" here are merely evaluative, rather than deontic? Maybe that's right, I'm not too sure what hangs on that distinction in this context. In any case, it's very different from the CD-ejection case where there's nothing to be said for the "merely instrumental" action in the absence of the end to which it is instrumental.
Posted by: Richard Yetter Chappell | August 16, 2013 at 12:04 PM
Hi Marcus,
There are certainly a lot of people who think that some obligations to perform acts derive from other obligations to perform acts. For instance, a lot of people would say that my obligation to drive the book over to you myself derives from my obligation to act so as to bring it about that the book is returned to you. And, thus, all these same people would say that, in the situation where I can't drive the book over to you myself but can get Smith to do it by paying him five bucks, I have an obligation to pay Smith five bucks to drive it over to you and that this obligation derives from the more general obligation that I have to act so as to bring it about that the book is returned to you.
So does Ross think that, in this sort of case, I have an obligation to see to it that you get your book back by means other than performing certain actions (by, say, having certain desires)? Only then would he be accepting the idea that I take to be groundbreaking.
In any case, if you have a specific reference for me, I would like that very much. But regardless I'll go back and read my Ross.
Posted by: Douglas W. Portmore | August 16, 2013 at 12:26 PM
Hi Richard,
Fair enough. I take your point. And, on reflection, I think that neither Regan nor I need commit to my off-the-cuff suggestion that you get no brownie points for fulfilling a derivative obligation in order to accept the idea that all obligations to perform acts derive from some obligation to X, where X is not merely the performance of some set of actions.
Posted by: Douglas W. Portmore | August 16, 2013 at 12:30 PM
Marcus: Oh, any specific references to the relevant Stoics would be welcomed as well.
Posted by: Douglas W. Portmore | August 16, 2013 at 12:55 PM
What you write is very interesting, Doug. However, I am not following why someone has to argue in that way. Why shouldn’t someone say the following? ‘What fundamentally matters is whether human beings have more or less wellbeing. If in situation S1 Stan can do basic movements A1 or A2, and A1 is what brings higher overall wellbeing, then A1 is what Stan should do. There is no distinction between bringing about higher overall wellbeing; bringing about a state of affairs in which wellbeing is at level L1 (where L1 is higher than the L2 which would have pertained had A2 been done); human beings having a higher level of welfare at future point in time T2; and Stan doing A1. You cannot have one without the other. Thus if one fundamentally matters then so does the other. Thus what fundamentally matters is that Stan does A1 in S1.’
However, the question of what Stan should think – how he should go about his decision-making (and in what way Ethel should seek to affect his thinking/decision-making) – in order to ensure he does A1 in S1 is quite a different one. It is a derivative one. If what would result in Stan doing A1 in S1 is that he makes himself someone who treats every person as an end rather than someone who tries to maximise wellbeing with each act, then he has a derivative duty to make himself that sort of person (and Ethel should try to persuade him to be that sort of person). But this duty is derived from the duty to do A1 in S1.
Let’s assume that similar things can be said about every other situation Stan will be in – or that he is about to die and so will only be in situation S1.
Posted by: Dan Dennis | August 17, 2013 at 06:18 AM
Hi Dan,
I think that I agree with everything that you say about the Stan case. I just think that there are other cases that show that our most fundamental obligations are to see to it that certain states of affairs are realized, and sometimes the only way to realize a state of affairs is to do something beyond just performing a certain action.
So suppose that what fundamentally matters is how much well-being human beings have. Now, take a look at The Two Buttons case from my earlier post. And assume that the numbers in Table 1 indicate how much well-being there will be for human beings. Only if both Coop and Uncoop push will there be 10 units of well-being. If Coop and Uncoop don't both push, there will be 6 or fewer units of well-being. The only way, then, for Uncoop to see to it that 10 units of well-being are achieved is to both push and desire that he and Coop cooperate. And the thought is that his obligation to push derives from his more fundamental obligation to see to it that he and Coop both push by both intending to push his own button and desiring that he and Coop cooperate.
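To fix ideas, here is the payoff structure as just described (the mixed cells aren't fully specified here, only that they yield at most 6 units; Richard's comment above puts the neither-pushes cell at 6):

\[
\begin{array}{c|cc}
 & \text{Coop pushes} & \text{Coop doesn't push} \\
\hline
\text{Uncoop pushes} & 10 & \le 6 \\
\text{Uncoop doesn't push} & \le 6 & 6
\end{array}
\]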
But maybe you just don't have the intuition that in The Buttons Uncoop should see to it that they both push by both pushing his own button and desiring that the two of them cooperate.
Posted by: Douglas W. Portmore | August 17, 2013 at 12:20 PM
Thanks Doug. I worry that The Two Buttons case is a science fiction example, so perhaps something is entering the reasoning that is impossible, thereby skewing the conclusion. (E.g. perhaps Uncoop’s willing to cooperate effectively causes Coop to push the button – implying that Uncoop’s willing causes Coop’s action, which cannot be the case in the real world.) Can you think of a non-science-fiction example which would lead to the same conclusion?
Posted by: Dan Dennis | August 17, 2013 at 05:47 PM
Hi Dan,
Just make Uncoop and Coop poker players. Let Uncoop have a tell that indicates that he's willing to cooperate. Let Coop be aware of Uncoop's tell and adept at detecting it. It's fictional, but it's not crazy, out-there fiction. If you want a non-fiction case, I'm not sure that I can provide one. But I'm fine with fiction.
Posted by: Douglas W. Portmore | August 17, 2013 at 06:48 PM
Hi Doug
In the poker case it is the physical movement, the tell (a twitch, say), which then causes Coop to press the button. So Uncoop changes his character in order to generate the tell/twitch. But we know that someone may have to change himself as a means to generating a certain physical movement (e.g., commonly, an athlete who is confident of winning is likely to run faster and so stands more chance of winning). However, what is doing the work is the physical movement (the tell/twitch, the running). If it were generated without the character change then that would be equally efficacious.
So perhaps the only reason the sci-fi ‘C on the forehead’ case produces the conclusion you want is because it effectively smuggles in the idea that mental change can have a direct causal effect on the wider physical world (such as Coop’s physical button pushing). Which in fact it cannot. Only bodily movements can. Hence your conclusion is not warranted?
Posted by: Dan Dennis | August 17, 2013 at 07:07 PM
Hi Dan,
You write: “the idea that mental change can have a direct causal effect on the wider physical world (such as Coop’s physical button pushing). Which in fact it cannot.”
By 'wider physical world', I gather you mean something beyond the subject who has the mental state. For intentions certainly cause changes in the physical movements of the given subject. And if my intentions can cause my arm to move, then why do you find it such a stretch to imagine that a desire could cause a change in the pigmentation of one's skin or that a desire could cause some involuntary physical tic of the eye that one cannot reproduce voluntarily? So just imagine that Uncoop has tried to fake his tic, but Coop is too adept at detecting the difference between a faked tic and a genuine involuntary tic.
In any case, all I need is a case where S1's attitude A (such as a desire, belief, or intention) causes some observable manifestation M on S1's body that in turn causes some other subject S2's behavior B, and where there is no way for S1 to cause B except through causing M and no way for S1 to cause M except through having A.
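Schematically, reading the 'no way except through' clauses counterfactually (a rough regimentation on my part, not a precise analysis of causal routes):

\[
A \xrightarrow{c} M \xrightarrow{c} B, \qquad \neg M \;\Box\!\!\rightarrow\; \neg B, \qquad \neg A \;\Box\!\!\rightarrow\; \neg M
\]

where $\xrightarrow{c}$ stands for causation and $\Box\!\!\rightarrow$ for the counterfactual conditional.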
So if you want, let A be Uncoop's having some desire or emotion that's detectable via fMRI, let M be a certain readout on the video monitor of that fMRI machine, and let B be Coop's pushing.
But, again, I don't really see any problem with the original example. It's perfectly coherent. There is a possible world in which The Buttons is actual. And the correct moral theory should tell us not only what my obligations are in the actual world but also what my obligations would be in other possible worlds.
Posted by: Douglas W. Portmore | August 17, 2013 at 07:38 PM
I'm having trouble seeing why cultivating and maintaining a certain character or set of habits or desires, or (1) holding "himself [sic] ready to do his part," (2) "identify[ing] the other agents who ...," and (3) "doing his [sic] part in the best plan" aren't actions. They're not very specifically described, but they're still actions.
Posted by: Dan Hicks | August 18, 2013 at 08:44 AM
Hi Dan,
Acts that help to cultivate or maintain certain habits or character traits are, of course, acts. But, on Regan's view, it is not just certain acts that are required. He explicitly says that, on his moral theory, “correct behaviour is required, but certain attitudes and beliefs are required as well” (1980, p. 124). For instance, to satisfy (1) above the agent must be willing to do his part in the best cooperative scheme involving willing cooperators, and to satisfy (2) above the agent must have correct beliefs about who is and isn't willing to cooperate. And, I take it, having a belief that Uncoop is not willing to cooperate or a willingness to cooperate with Uncoop is not a voluntary act. There may be acts that would cultivate such attitudes, but, on Regan's theory, you're required to have such attitudes and not just act so as to cultivate them.
Posted by: Douglas W. Portmore | August 18, 2013 at 10:11 AM
To pick up a thread about the Stoics: Seneca specifically argues for an obligation to avoid treating slaves cruelly on the basis of the fact that they are human beings, and thus are rational agents like their owners. Though the Stoics didn't go as far as directly challenging the existence of the institution of slavery itself, they certainly seem to be moving incrementally in that direction by endorsing expanded expectations about manumission. See William Stephens' discussion of this in his book on Marcus Aurelius. And of course there is the famous passage in Hierocles about expanding the strength of our moral concerns so that they eventually are genuinely cosmopolitan.
Posted by: Larry Becker | August 19, 2013 at 11:51 AM
Hi Larry,
I'm looking for examples of philosophers (other than Regan) who claim that obligations to perform actions must ultimately derive from some obligation to X, where X is something other than the performance of an action -- e.g., where X is a set of beliefs and desires that one is obligated to have. The fact that human beings are rational agents is a reason not to treat them in certain ways. And from these reasons derives an obligation not to treat them in certain ways. But I'm not seeing how this is an instance of a philosopher holding that the obligation to avoid treating slaves cruelly derives from an obligation to X (where X is something other than an action). If I'm just missing it, perhaps you could say what you take 'X' to stand for in the case of Seneca.
Posted by: Douglas W. Portmore | August 19, 2013 at 12:02 PM
Hi Doug
1) I take it that we agree that there are duties to do certain mental acts, and duties to be a certain sort of person. The question at issue is whether these are always, sometimes or never derived from other duties – in particular the duty to perform particular actions. You say above ‘I think that Regan’s groundbreaking idea is that there are no fundamental moral obligations to perform actions’. It is that which I am not getting. In the Stan case which I gave above, which you said you agreed with, there are fundamental obligations to perform actions; you then referred to the sci-fi example, which you took to be one where there are no fundamental obligations to perform actions. Even if this were so, it would only mean that sometimes there are no fundamental obligations to perform actions.
2) Is not Kant an example of someone who thinks fundamental obligations are not to perform actions? Rather one’s fundamental obligation is to have one’s selection of acts governed by the Categorical Imperative, which then implies adopting a set of maxims selected by the various formulations of the Categorical Imperative, and then when in a particular situation selecting only acts which conform to those maxims.
3) I just have difficulty seeing how Consequentialists can pull this off – given Consequentialists are primarily concerned with states of affairs, which in turn are inseparable from act performance (as discussed in the Stan case).
Posted by: Dan Dennis | August 19, 2013 at 04:31 PM
Hi Dan,
Regarding (1): First, let me be clear that I'm working with roughly the following definition of a voluntary act: S's act of X-ing was voluntary if and only if S X-ed as a result of S's intending to X. Now some mental acts are certainly voluntary. For instance, I deliberate about p as a result of intending to deliberate about p. So the act of deliberation is a mental act that is also voluntary. But other mental "acts" are not so much acts as events. As I look at the computer monitor before me, I come to believe that I'm controlling what words appear on the computer monitor. This formation of a belief is an event, but it is not something that I do by intending to do it. I didn't intend to believe this and thereby come to believe it. Rather, I was aware of certain facts, these facts constitute reasons for me to believe that I'm controlling what words appear on the monitor, and, being rational, I respond appropriately to these reasons by involuntarily forming the belief that I'm controlling what words appear on the monitor.
Now, the question is not, as you claim, whether the "duties to do certain mental acts, and duties to be a certain sort of person" always, sometimes or never derive "from other duties – in particular the duty to perform particular actions." Rather, the question is whether, for any duty to perform an act (a voluntary act), that duty must always derive from a duty to X, where X is not itself a voluntary act. That's what Regan thinks and that's what I think. Regan thinks, for instance, that Uncoop has a duty to push and that he has this duty because (1) Uncoop has a fundamental duty to correctly apply a certain decision procedure (and correctly applying it involves not simply performing certain voluntary acts but also involuntarily forming certain beliefs and other attitudes) and (2) Uncoop has a derivative duty to push because he would push if he were to correctly apply this decision procedure. On my view, Uncoop's most fundamental duty is to form involuntarily both the intention to push his button and the desire that he and Coop cooperate with each other. And, on my view, Uncoop has a duty to push (a voluntary act) only because he has this duty to involuntarily form these attitudes, which, if formed, will result in his pushing.
In the Stan case, Regan and I would deny that Stan has a fundamental obligation to do A1. Regan would say that Stan has a fundamental obligation to correctly apply Regan's decision procedure and that Stan has a derivative obligation to do A1 in virtue of the fact that he would do A1 if he were to fulfill this fundamental obligation. And I would claim that Stan has a fundamental obligation to form involuntarily (in response to reasons) the intention to do A1 and that he has a derivative obligation to do A1 in virtue of the fact that he would do A1 if he were to fulfill his fundamental obligation.
Regarding (2): It seems to me that Kant holds that there is an obligation to act in accordance with the categorical imperative. If he thinks that this obligation derives from, say, a fundamental obligation to be the type of person who is governed by the categorical imperative (where this obligation doesn't derive from an obligation to act so as to become such a person), then I would say that he is in the same camp as Regan and myself. But I have to say that it is not clear to me that Kant holds any such view.
Regarding (3): If one holds, as Regan does, that the reason an agent has a fundamental obligation to follow Regan's decision procedure (the CU decision procedure) is that it is only by correctly applying this decision procedure, and no alternative decision procedure, that the agent can be sure to produce the best consequences that he and the other willing cooperators can collectively produce, then the view sounds consequentialist to me. It's an indirect version of consequentialism, like rule consequentialism, but, given that the deontic status of an action is determined by whether it would be performed by one who correctly applies the decision procedure whose correct application guarantees that one produces the best outcome that is collectively possible, it seems consequentialist to me.
Posted by: Douglas W. Portmore | August 19, 2013 at 05:25 PM
Hi Doug
Interesting, thanks. You say you think that ‘for any duty to perform an act (a voluntary act), that duty must always derive from a duty to X, where X is not itself a voluntary act’. What does it mean to say that you have a duty to X when X is not something you can do voluntarily? Presumably X is not something which is going to happen anyway, otherwise it is redundant to talk of having a duty to X. In what sense then does one have a duty to X? Is it that the duty in question is really the duty to do some voluntary act Y (e.g. make yourself someone whose decision procedure is sensitive to reasons, and seek out the most compelling set of reasons), which will then result in your doing the involuntary act X (or, perhaps more accurately, in the event X occurring) of your becoming a certain sort of person (one who employs a certain decision-procedure) in response to those compelling reasons?
Posted by: Dan Dennis | August 20, 2013 at 06:27 PM
Hi Dan,
You ask,"In what sense then does one have a duty to X?" ...where, say, X is the involuntary acquisition of some attitude in response to one's reasons for having that attitude.
My answer is: in the ordinary sense. We often say things like the following: (1) You should believe that he's guilty. (2) You should want to be with her. (3) You shouldn't hate him. (4) You shouldn't feel guilty. (5) You shouldn't intend to kill innocent civilians. (6) You should be ashamed of yourself. Etc. All of these seem to express the claim that you have an obligation that you cannot fulfill by intending to do what is obligatory. If you've given me a lot of evidence that p, then I should believe that p – not by intending to believe that p, but by responding appropriately to the reasons that I have to believe that p (specifically, the evidence that you've shown me). We hold people responsible for their attitudes when they have reasons to have (or lack) these attitudes and they have the general capacity to respond appropriately to these sorts of reasons and thereby come to have (or lack) the attitudes that they have good reasons to have (or lack).
Posted by: Douglas W. Portmore | August 20, 2013 at 08:48 PM
Hi Doug
So on your view, what is the point of saying to someone ‘(1) You should believe that he's guilty. (2) You should want to be with her. (3) You shouldn't hate him. (4) You shouldn't feel guilty. (5) You shouldn't intend to kill innocent civilians. (6) You should be ashamed of yourself. Etc.’?
I think the point of saying those things is to encourage the person to undertake those voluntary acts which would result in his 'believing he's guilty', etc. – i.e., to undertake the voluntary act of changing himself in such a way as to become someone whose attitudes are responsive to reasons.
If someone was not able to change himself in those ways, for example someone mentally handicapped, then one would not blame him for having those attitudes any more than one would blame a tiger for eating a child that strayed into its enclosure.
Posted by: Dan Dennis | August 21, 2013 at 06:19 AM
Hi Dan,
I'll stick with belief for the purposes of illustration. The point of saying "you shouldn't believe that" could be to hold them accountable for believing what they have insufficient evidence to believe. Or you might say this as an introduction to citing a series of reasons for why they shouldn't believe that in the hopes that they'll respond involuntarily to the awareness of those reasons by withdrawing their inappropriate belief.
But what does it matter? My claim is that people have an obligation to have certain attitudes even when the only way for them to come to have those attitudes is by responding to their reasons and thereby involuntarily coming to have those attitudes.
Posted by: Douglas W. Portmore | August 21, 2013 at 06:08 PM
Hi Doug
What does it mean ‘to hold them accountable’? Does it just mean to point the finger at them, or to have a certain attitude towards them?
It can happen that ‘citing a series of reasons for why they shouldn't believe that in the hopes that they'll respond involuntarily to the awareness of those reasons by withdrawing their inappropriate belief’ works. However, it does rely on the person being someone who responds involuntarily to reasons of that type.
But if Albert will only have attitude X by being a person who responds to reasons and seeks out relevant reasons, and Albert will only be a person who responds to reasons and seeks out relevant reasons through voluntarily choosing to (do the act of) make himself such a person, then whether Albert has attitude X depends upon his voluntary choice and thus his voluntary act.
For example, a Creationist voluntarily chooses not to have his beliefs about evolution determined by reasons, and as a result does not believe that homo sapiens evolved. Were he to voluntarily choose to change himself, to (do a mental act which makes him) become someone whose beliefs about evolution were responsive to reasons, then he would believe that homo sapiens evolved. So at root whether he believes in evolution depends upon which voluntary choices he makes and thus which voluntary acts he does.
We can then say that Albert and the Creationist each ought to perform the voluntary act of making himself someone whose attitudes and beliefs are responsive to reasons – that is a fundamental obligation.
Posted by: Dan Dennis | August 23, 2013 at 05:36 PM
Hi Dan,
I accept the following claim, which I'll call "(C)":
(C) It is possible for a subject S to have had an obligation to have A-ed at t (where A is some attitude, such as the belief that p or the intention to do x) even though there was no prior time (t0) such that S would have A-ed at t had S voluntarily performed some available action at t0.
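In symbols, one way (among others) of regimenting (C), with $\Box\!\!\rightarrow$ for the counterfactual conditional and $O$ for obligation:

\[
\Diamond\,\Big[\, O\big(S \text{ has } A \text{ at } t\big) \;\wedge\; \neg\,\exists\, t_{0} \!<\! t \;\exists\, \phi \in \mathrm{Avail}(S,\,t_{0}) \; \big( S \text{ does } \phi \text{ at } t_{0} \;\Box\!\!\rightarrow\; S \text{ has } A \text{ at } t \big) \Big]
\]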
I believe that many philosophers and laypersons accept (C) either explicitly or tacitly. And I admit that my view that what agents are obligated to do can depend on what attitudes they are obligated to involuntarily form is going to be implausible if (C) is false.
Now, I gather that you're skeptical about (C). I'm not really prepared to defend (C). There's a large literature on this, and I'm not the most up to date on it. What I take myself to be doing, then, is arguing for a view on the basis of a claim (that is, (C)) that many people accept. So (C) is just one of my starting assumptions.
So if you don't accept (C), then my view won't seem plausible to you. Fair enough. But if you have an objection that doesn't rest on rejecting (C), I would probably be prepared to defend my view against it. Provided, of course, it isn't just an objection to some other starting assumption of mine.
Posted by: Douglas W. Portmore | August 23, 2013 at 05:58 PM
Hi Doug
Fair enough. But isn't (C) simply the rejection of ‘ought implies can’? I have to say that I get the impression that most laypeople and philosophers accept ‘ought implies can’. I could be wrong of course. The reason I accept ‘ought implies can’ is that to the extent that a normative ethical theory does not provide guidance for how to act, to that extent it is pointless…
Posted by: Dan Dennis | August 24, 2013 at 08:41 AM
Hi Dan,
I think that most laypeople accept that 'S ought to do x' implies that 'S would do x if S were to intend to do x', but I don't think that most laypeople think about, let alone accept, the view that 'S ought to A (where A is some belief, desire, or intention)' implies that 'S would A if S were to intend to A'. I suspect that what 'S ought to A' implies is something like 'S has the mental capacity to A, is moderately reasons-responsive (in J. M. Fischer's sense), and would A if she were to respond appropriately to her reasons'.
Note that if you insisted both that (1) 'S ought to do x' implies that 'S would do x if S were to intend to do x' and that (2) 'S ought to intend to do x' implies that 'S would intend to do x if S were to intend to intend to do x', you would have to hold one of the following implausible views: (a) whenever S ought to do x it is not the case that S ought also to intend to do x, (b) it is never the case that S ought to do x, or (c) it is sometimes the case that S ought to do x but that's only because S can have an infinite series of intentions.
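To lay the regress out explicitly (the notation is mine): write $I^{0}x$ for doing $x$ and $I^{n+1}x$ for intending to $I^{n}x$. Then (1) and (2) are instances of the schema

\[
O(I^{n}x) \;\rightarrow\; \big( I^{n+1}x \;\Box\!\!\rightarrow\; I^{n}x \big),
\]

reading each $I^{n}x$ inside the conditional as 'S $I^{n}$-x's', and the linking claim that generates the trouble is $O(I^{n}x) \rightarrow O(I^{n+1}x)$. Given both, $O(I^{0}x)$ requires $O(I^{n}x)$ for every $n$: option (a) rejects the linking claim at $n = 0$, option (b) denies that $O(I^{0}x)$ ever holds, and option (c) embraces the infinite series of intentions.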
Posted by: Douglas W. Portmore | August 24, 2013 at 09:37 AM
Hi Doug
I agree that with regard to physical actions most accept ought implies can, and that with regard to desires the majority do not think ought implies can. For example, it would be common to say ‘Ed ought not to have those paedophilic desires but he cannot help it.’ Not sure about beliefs and intentions though – I think ordinary people see them as more easily influenced than desires…
As for what the correct account is, well as I said before, I cannot see the point of saying to someone ‘You ought to X’ if he cannot X. It is like saying to the dyslexic boy, ‘You ought not to make spelling mistakes.’ I think the aim of normative ethics is to help one act better…
Posted by: Dan Dennis | August 25, 2013 at 06:15 PM