The Safe: Imagine that the time is T-0 and that the fate of the world depends on your opening a safe by T-11. Suppose that the correct ten-digit combination is 205-513-9437. And suppose that although this ten-digit combination is unknown to you, you could dial it in the following sense. If you were to intend at T-1 to dial a ‘2’ for the first digit of the combination, you would then succeed in doing so. And if you were subsequently to intend at T-2 to dial a ‘0’ for the second digit of the combination, you would then succeed in doing that. And so on and so forth for the remaining eight digits. Thus, there is a sequence of actions, each of which you are capable of performing, that would result in your opening the safe by T-11. Let me stipulate, though, that given your ignorance of the correct combination you would in fact fail to open the safe by T-11 no matter what you intend at T-0 to do.*

Critics argue that, despite this last fact, OC implies that you are (as of T-0) morally required to dial 205-513-9437 and thereby open the safe by T-11, for this is, of all your alternatives, the only one that would maximize the good. They conclude, then, that OC absurdly implies that you are required to dial the correct combination despite your ignorance of the correct combination. Moreover, Wiland argues that if you’re required to dial the correct combination in this case, then you’re required, on almost every other occasion, to type out and publish the next supremely important document (perhaps one describing the cure for cancer), for typing out and publishing this document would maximize the good. And since almost all of us regularly fail to type out and publish the next supremely important document, almost all of us are acting impermissibly virtually all the time.
But let’s consider more carefully the critics’ assumption that, in The Safe, your dialing the correct combination is one of the alternatives from which OC tells you to select. Whether it is, we’ll see, depends on what we take the relevant alternatives to be. Of course, it’s clear that, in The Safe, the relevant alternatives are sequences of actions — e.g., sequences of number-dialing acts. But the critics go further and assume that the relevant alternative sequences are those that are, as of T-0, personally possible for you. The consequentialist could, however, reject this assumption and claim instead that the relevant alternative sequences are those that are, as of T-0, securable by you. In so doing, the consequentialist could avoid the absurd implication that you are required, in The Safe, to dial the correct combination as well as the absurd implication that you are required, on almost every other occasion, to type out and publish the next supremely important document.
To illustrate the difference between what’s securable by an agent and what’s personally possible for that agent, consider the following. Suppose that I’m on a low-fat, low-carbohydrate diet. It’s now 2 p.m., and I’m thinking that I should grill up some asparagus and a lean boneless, skinless chicken breast for tonight’s dinner. If this is going to be what I’m having for dinner, I’ll need to get into my car, drive to the grocery store, buy the necessary ingredients, drive back home, marinate the chicken, prep the asparagus, fire up the grill, wait for it to heat up, grill the chicken and asparagus, plate them, and eat them. But suppose that, as a matter of fact, no matter what I plan or intend at 2 p.m. to do later, I will not end up eating grilled chicken and asparagus for tonight’s dinner. For, as a matter of fact, I’m going to eat pizza instead. What’s going to happen is this: while in the midst of preparing the chicken and asparagus, I’m going to get very hungry. And, in this state, I’m going to be overcome by the temptation to go off my diet and will at 5 p.m. throw the chicken and asparagus back into the fridge, turn off the grill, and order pizza. It’s not that I couldn’t grill and eat the chicken and asparagus. Indeed, I could, in the sense that, were I to intend at 5 p.m. to continue with my plan to grill and eat chicken and asparagus, that’s exactly what I would do. It’s just that, as a matter of fact, I will at 5 p.m. abandon my plan and order pizza instead.
In this case, my eating chicken and asparagus at 6 p.m. is, as of 2 p.m., personally possible for me, but it is not something that is, as of 2 p.m., securable by me. It’s personally possible for me, for there is a series of steps that I could take that would culminate in my eating grilled chicken and asparagus at 6 p.m. tonight, and the following is true of each of those steps: having taken the previous step or steps, I could then take the next step, in the sense that, were I to intend to take the next step, I would succeed in doing so. Nevertheless, my eating chicken and asparagus at 6 p.m. is not, as of 2 p.m., securable by me, for, as we’re supposing, no matter what I intend to do now at 2 p.m., I will not end up eating grilled chicken and asparagus at 6 p.m. Even if I were, at 2 p.m., to form the most resolute intention to stick with my plan, it’s a fact about my psychology that I would abandon it in the face of the ensuing hunger and the ever-increasing temptation to eat pizza. Thus, there’s absolutely nothing that I can intend to do now that will result in my eating chicken and asparagus at 6 p.m. — or so we’re assuming.
Now, in The Safe, your opening the safe is, as of T-0, personally possible for you, but it is not, as of T-0, securable by you. And the fact that it is absurd to suppose that you are, as of T-0, morally required to open the safe by T-11 when, no matter what you intend at T-0 to do, you will fail to open it by then is not a reason to reject OC, but rather a reason to identify the relevant alternatives as those that are securable as opposed to personally possible. Once we combine OC with the idea that the relevant alternatives are all and only those that are securable by the agent, we avoid absurd implications in cases such as The Safe.
*Here and elsewhere, I’m assuming that, for each set of actions that S might perform, there is some determinate fact as to what the world would be like were S to perform that set of actions. And I’m also assuming that, for each set of intentions that S might form, there is some determinate fact as to what the world would be like were S to form those intentions. This assumption is sometimes called counterfactual determinism — see, e.g., Bykvist 2003. I’m assuming, then, that even if you were to pick the correct ten-digit number at random and intend to dial it, and so intend at T-0 to dial 205-513-9437, you would still fail to dial that number, for you would, say, change your mind half-way through and decide to dial a ‘6’ instead of the ‘3’.
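For those who find a model helpful, here is a toy rendering of The Safe in Python. It is merely an illustrative sketch, not part of the argument: it encodes the counterfactual-determinism assumption from the footnote above as a single deterministic function from T-0 intentions to the acts that would in fact ensue, and both the function names and the particular psychology built into them are my stipulations.

```python
# A toy model of the securable / personally-possible distinction in The Safe.
# Illustrative only: it leans on the counterfactual-determinism assumption in
# the footnote above, and the psychology encoded here is pure stipulation.

COMBINATION = "2055139437"   # the correct ten-digit combination

def acts_following(plan: str) -> str:
    """What you would in fact dial, were you to intend at T-0 to dial `plan`.
    Per the footnote's stipulation: even if you happened to pick the correct
    combination at random and intend to dial it, you would change your mind
    half-way through and dial a '6' in place of the '3'."""
    if plan == COMBINATION:
        return plan[:5] + "6" + plan[6:]   # the plan gets abandoned mid-way
    return plan                            # any other plan runs to completion

def securable_at_t0(outcome: str, formable_plans) -> bool:
    """Securable as of T-0: SOME intention you could now form is such that,
    were you to form it, the outcome would in fact ensue."""
    return any(acts_following(plan) == outcome for plan in formable_plans)

def step_would_succeed(already_dialed: str, next_digit: str) -> bool:
    """Step-wise counterfactual: having dialed `already_dialed`, were you to
    intend to dial `next_digit` next, you would succeed. In The Safe this
    holds for every digit at every step."""
    return True

def personally_possible_at_t0(outcome: str) -> bool:
    """Personally possible as of T-0: there is a chain of steps, each of
    which would succeed were it intended at its time, culminating in the
    outcome."""
    return all(step_would_succeed(outcome[:i], digit)
               for i, digit in enumerate(outcome))

if __name__ == "__main__":
    # Only the plan identical to the combination could even be a candidate
    # for issuing in the correct dialing, so a small sample of plans suffices.
    sample_plans = [COMBINATION, "0000000000", "2055169437"]
    print(personally_possible_at_t0(COMBINATION))      # True
    print(securable_at_t0(COMBINATION, sample_plans))  # False
```

The sketch makes vivid how each step-wise counterfactual can hold even though there is no intention formable at T-0 that would in fact issue in your dialing the correct combination.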
Doug - thanks for this discussion; now I think I understand the difference between what is "securable" and what is "personally possible". Tell me if I got this wrong:
For it to be the case that dialing 205-513-9437 is securable by me at T1, it must be the case at T1 that: there is some intention I could form (e.g. the intention to dial 205-513-9437) such that if I form it, then I will in fact dial 205-513-9437.
For it to be merely personally possible at T1 for me to dial the number 205-513-9437, it only has to be the case at T1 that: if I form a relevant intention at T1 (e.g. to dial 2), then I will in fact dial 2, and if I form a relevant intention at T2, then I will in fact dial 0, and if I form a relevant intention at T3, then I will in fact dial 5, and so on.
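In symbols — and this is just my own shorthand for the two conditions above, writing □→ for the counterfactual conditional and F_Tk(i) for my forming intention i at Tk:

```latex
% Securable at T1: some single intention formable at T1 would in fact
% issue in my dialing the whole combination.
\mathrm{Securable}_{T_1} \;\iff\;
  \exists i\,\bigl(F_{T_1}(i) \mathrel{\Box\!\rightarrow} \text{I dial 205-513-9437}\bigr)

% Merely personally possible at T1: each step-wise counterfactual holds,
% the later ones evaluated on the supposition that the earlier steps
% have already been taken.
\mathrm{PersonallyPossible}_{T_1} \;\iff\;
  \bigl(F_{T_1}(i_1) \mathrel{\Box\!\rightarrow} \text{I dial 2}\bigr) \wedge
  \bigl(F_{T_2}(i_2) \mathrel{\Box\!\rightarrow} \text{I dial 0}\bigr) \wedge
  \dots \wedge
  \bigl(F_{T_{10}}(i_{10}) \mathrel{\Box\!\rightarrow} \text{I dial 7}\bigr)
```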
But now I don't see how these claims seriously threaten Wiland's sort of objection. Can't he just say that OC now obligates you to write the first letter of the next Great American Novel, then it will obligate you to write the second letter of the next Great American Novel, then it will... and so on. Isn't this set of claims about obligation just as implausible as the claim that it (now) obligates you to write the next Great American Novel? (Indeed, couldn't we plausibly interpret the latter, if we are worried about INTUITION, as just a bit of shorthand for the former?)
Posted by: Simon Rippon | May 12, 2010 at 01:43 PM
Zimon,
You write:
Can't he just say that OC now obligates you to write the first letter of the next Great American Novel[?]
He could say that, but, depending on what letter that is, it would probably be a false statement. Suppose that in the above quote 'now' refers to the moment in time at which I typed the letter 'Z' above. Suppose that, as it happens, the next Great American Novel begins with the letter 'Z'. Further suppose that you are offended by my spelling your name 'Zimon' rather than 'Simon' and that had I used a different example that didn't involve typing 'Z' for the first letter of this comment, more good would have actually obtained. In that case, it's false that I was, according to OC, obligated, as of that time, to type 'Z' above. Note that, according to OC, whether I'm obligated to type a letter depends on the consequences of my doing so. Further note that my doing so doesn't have the good consequence of my following that action with a sequence of keystrokes that results in the production of the next Great American Novel (I hope that this comment illustrates at least that much). And it does, we're supposing, have the bad consequence of offending you. So, according to OC, I was in fact obligated not to type the first letter of the next Great American Novel.
Of course, you may want to say that, having typed 'Z', I was then obligated to type the next letter of the next Great American Novel. But, again, whether I was obligated to do that depended on what the actual consequences of my doing so would have been in the closest possible world to the actual one, not on what the consequences of my doing so would have been in the far-off possible world in which I follow that second letter with the next 80,000 or so keystrokes required to produce the next Great American Novel.
Posted by: Doug Portmore | May 12, 2010 at 02:14 PM
Only the closest possible world is relevant?
But what about this case: At T1 Abe is walking past a pond with a drowning child in it; he could jump in now and save her life, but if he does jump in, he'll in fact (in the closest possible world) realize at T2 that the child has ginger hair, and because of Abe's dislike of ginger-haired people (he's a real jerk), he'll intentionally get out of the pond and let the child drown. Moreover, he'll have wet shoes.
Do you want OC to say that Abe is therefore not obligated at T1 to jump in and save the child, nor even to take the first step of jumping in? (Suppose that Abe can make no resolution or intention at T1 to save the child that is immune from being second-guessed by his choice at T2. But he can at T1 secure his jumping in, and if he does jump in, Abe will be able to secure either saving the child or not doing so at T2. It's just that he'll in fact choose not to save her.)
Doesn't this view make our obligations too easy to escape?
Posted by: Simon Rippon | May 12, 2010 at 07:36 PM
Simon,
Is there no intention (no plan, no resolution, nothing) such that, were Abe to form that intention at T1, Abe would save the child at T2? Could he not perhaps form, at T1, an efficacious intention to save the child no matter what color his or her hair?
If you answer 'no' to both questions, then I'm committed to saying that Abe is not, as of T1, obligated to save the child. After all, there's no plan that he could initiate, at T1, that would ensure that he saves the child at T2. This is not to say that he's not blameworthy. Presumably, he must have done something wrong to have ended up as a person who cannot bring himself to save ginger-haired children. But if that's really the way he is now, then he has to work within the limits of his current moral imperfections.
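If it helps, here is the same sort of toy model as the sketch in the main post, applied to your Abe case. The two-stage structure and all of the labels are stipulations I'm adding for illustration; nothing in the case hangs on them.

```python
# A two-stage toy model of the Abe case. Illustrative only: the labels and
# the way the T2 choice overrides any T1 resolution are stipulations.

def t2_choice_abe_would_make() -> str:
    """What Abe would in fact choose at T2, whatever he resolved at T1:
    he sees the child's ginger hair and gets out."""
    return "get out"

def outcome_of_t1_intention(intention: str) -> str:
    """Maps each intention Abe could form at T1 to the outcome that would
    in fact ensue (counterfactual determinism again)."""
    if intention == "stay on the path":
        return "child drowns, dry shoes"
    # Any jumping-in intention does get him into the pond, but the T2
    # choice is then made afresh, and he would in fact get out.
    if t2_choice_abe_would_make() == "save the child":
        return "child saved"
    return "child drowns, wet shoes"

T1_INTENTIONS = ["stay on the path", "jump in", "jump in, resolved to save"]

# Securable as of T1: exactly the outcomes that some T1 intention
# would in fact issue in.
securable_at_t1 = {outcome_of_t1_intention(i) for i in T1_INTENTIONS}

print(securable_at_t1)                   # no saved-child outcome in the set
print("child saved" in securable_at_t1)  # False
```

The set of outcomes securable as of T1 contains no outcome in which the child is saved, and that is exactly the commitment I'm owning up to.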
In any case, this, like some of the discussion above, concerns whether one has actualist or possibilist intuitions. The possibilist has to bite some difficult bullets. I find the bullet that I have to bite quite palatable as compared to the possibilist's very unsavory bullets.
Posted by: Doug Portmore | May 12, 2010 at 08:11 PM
Doug, I never understood the problem of memory/difficulty of typing to be the problem that Howard-Snyder and Wiland's examples were getting at. So attaching the "securable" qualification to one's original intentions seems completely orthogonal to their main points. If you're trying to distinguish between, say, someone who could make *one* intention and thereby complete the activity as a guaranteed result of this intention, and someone who needs additional (contingent) intentions to do so, this fails, because we're all in the first condition for most activities. You ask:
do you think that it's likely that there is any intention that I could form now such that, were I to form this intention now, I would type out the next King Lear?
But I doubt that Shakespeare ever had such an intention which was sufficient to perform the job; he needed others that occurred to him along the way. So would I. Now the *probability* of my being able to form such appropriate sequential intentions is much lower than that of Shakespeare doing so at some point in time, or Don DeLillo, Toni Morrison, etc. But it is *possible* (and it is also possible that such masters of the craft would fail; it's a matter of degree, though a huge one).
I take the Howard-Snyder/Wiland cases to show: (1) that the differences between logical possibility and what we might call morally practical possibility are so vast, much, much vaster than what we see in more mundane examples like pressing a red or green button to stop a bomb, that to make moral attributes a function of the former instead of the latter is more palpably absurd than might previously have been thought; (2) that even if I were to press the right button, win the chess game, etc. by making random motions and hitting on the solution by luck, only the motor actions can be considered truly mine, and not the relevant success; the latter is not something *I* do as an agent, but merely something my bodily actions contribute to in conjunction with highly fortuitous circumstances, and hence it is not relevant to moral judgment of *my* actions, which are now more clearly seen to be based on intentional responses to the evidence which I have and can assess about my situation, responses which *must* include awareness of and appropriate responses to my cognitive limitations; and (3) [Wiland's addition] that this gulf appears not just on rare occasions, but constantly and massively.
Now, that said, you can invoke a theory by which you *call* anyone who fails to maximize the logically possible good (copossible with their current physical state, and notably including vastly improbable psychological possibilities) "wrong." But this now appears to be (1) a palpable abuse of the English language, and (2) a palpable abuse of language-independent moral concepts. If anything is true, we are not all morally wrong for failing to do these things at nearly every moment [this shows we are *imperfect*, but to conflate imperfection with wrongness now appears more vividly to be a salient fault of objective consequentialism]. My inability to come up with the next great novel or scientific document is not based on the fact that I couldn't form a single intention to put it all down at once, or that I would have some irresistible compulsion to mistype a crucial character or section part-way through. It's that the subjectively-known chances of my successfully completing it, while non-zero, are so low that I would now be wrong to form and act on any intentions to try to produce such documents instead of doing other things with far higher subjective expected value. For forming and acting on intentions in response to accessible evidence is what agents actually do; whether they produce good results, write novels, or even succeed in pressing buttons, is strictly speaking out of their control. They can only reach in certain directions with certain hopes and expectations; to judge them right or wrong based on the results is, strictly speaking, to confuse qualities of the resulting states of affairs with qualities of their actions. Now this is something that deontologists have long accused consequentialists of doing; and I think they were right. What's interesting is that a consequentialist response to this criticism is possible: it is subjective consequentialism.
You concede above that most of us are at most moments of the day not doing what we are, as of that moment, objectively morally required to do. But then you basically agree with Howard-Snyder and Wiland about the facts on the ground, so to speak; you just want to use different words than the ones they want to use. The question then is: is it more or less confusing to say that Caesar was equally wrong not to have advanced human civilization by inventing the steam engine as not to have done so by restoring the republic, than to say what normal people would say about this case? I think it is deeply misleading to say such things; we would constantly have to be qualifying the term "wrong" using objective and subjective parameters. We could also say a dog has five legs, four regular ones and a fifth "tail leg." But why on earth should we do this, when ordinary language maps better onto the important real distinctions? We can say all you want to say about "objective wrongness" by using words like "actually bad results." But this has *nothing* to do with moral qualities of human action /except/ insofar as evidence for, anticipation of, etc. such results formed some part of some agent's intentional response to some facts.
Posted by: Scott Forschler | May 20, 2010 at 08:51 AM