
June 18, 2004



Can we read your paper on this somehow?

Right now I'm transitioning between jobs, so I don't have anywhere to post papers on-line at the moment. Hopefully at some point soon I'll be setting up a new webpage. In the meantime, I'd be happy to email you a copy - email me if that works for you.

Example first, then a suggestion.

It's World War II. During a bloody battle for a city, say Stalingrad, a macabre if by now all too familiar ceremony is taking place: five members of a Nazi firing squad are about to execute five prisoners, innocent civilians all. The only thing they are waiting for is a sign from a man in an officer's uniform. I'm a sniper, and as it happens, I've got the officer's heart in my sights - fortunately, since my gun is fixed in place so that I can't change the aim. This "officer", incidentally, is not really German at all, but a Russian civilian who put on the uniform just to keep from getting cold. Though he's nothing like a friend, I know him, and I know that he believes that by raising his arm, he will stop the soldiers from shooting, while in fact that is precisely the sign they're waiting for. I know the Germans' aim is steady and their guns are working, since I just saw them execute five other prisoners. I also know that without authorization, they simply will not shoot but instead report to company headquarters for further instructions, during which time I will be able to release the prisoners and take them to safety. I see that the "officer" begins to lift his hand. Should I pull the trigger?

In any real-life case - if this counts as one - there are epistemological uncertainties involved, but let's set them aside here. Is the agency of Agents 2 to 6 (the soldiers) neutralized? I don't see that. A better description of the case might go like this: the choices that Agents 2-6 have made necessitate that each of them will x unless y happens. They've made a conditional decision (read this compatibilistically or incompatibilistically), indeed formed a conditional intention. The paradox is that I can bring about y and thus push them onto another path (one that they've chosen), but only by x-ing myself. How would this hamper their agency? They could have decided to fire when the moon comes out, instead. Would the moon staying behind the clouds remove their responsibility for not firing - or its coming out, for their firing?

I hope I understood your point correctly. I would, by the way, like to have your paper by mail as well - I'm presenting my own solution in a couple of weeks at a conference.


No, I don't think the agency of the soldiers is neutralized in a case like this. Rather, it's the kind of case I was imagining in the last long paragraph of my original post, where Agents 2-6 are, in fact, willing. So far, I think we agree.

What I want to say, at this point, though, is that there is no paradox in this kind of case. The reason is that deontology doesn't recommend (in this case) more rather than fewer violations of the duty not to kill. For, it recommends that none of the six Agents kill anyone. Those duties were violated, in turn, not when the trigger is actually pulled, but at what you've identified as the decision-making time, i.e., when Agents 2-6 set up the dilemma (the firing squad) in the first place. Of course, either Agent 1 or Agents 2-6 will have to violate their duties not to kill, but the point of wrongdoing lies not in the actual pulling of the trigger, but, rather, in the "pre-murder," i.e., the setting up of this situation to begin with. This is what allows deontology to say that none of 1-6 should kill.

Proponents of the paradox, though, need a case where deontology says something else, namely that Agent 1 should not kill, even though Agents 2-6 will then kill, *and* that the wrongness of the duty violation(s) enters in *exactly* when those killings take place. So, in this kind of case, I don't think we've got a paradox-generating scenario. I could be talked out of it - I'm still not sure on this one - but that's the worry, anyway.
