July 27, 2006

Comments

I think what is supposed to be doing the work in G3 against the Nazi example is the 'relevantly like them'. The idea probably is that when we imagine ourselves into the position of others, we would have to adopt a large chunk of their beliefs, values, desires, and so on. A version of an actual Jewish person who held the Nazi world-view would then not count as a relevantly similar position to be adopted. The Nazi who wanted to test his actions with the Golden Rule would then have to adopt the Jewish person's position with the beliefs and values that come with it. Of course, from that perspective the evil deeds could not be rationally chosen, and thus the actions have to count as wrong. My small worry is that G3 is such a revised version of the basic Golden Rule that it's hard to see how it is the same principle.

Perhaps you're right, Jussi, but then there is a real difficulty in making clear sense of the phrase, "What if I were an X?" In order for me to imagine myself as someone else, there has to be enough of myself retained for me to gain some sort of imaginative purchase on the moral conclusions to draw. If the person into whose shoes I'm stepping is very different from me and I'm somehow to imagine being relevantly like him, then unless I retain some of my core psychological elements in the imaginative leap, I have no more clue as to what I should do than if I did no imaginative projection at all. I was assuming, then, that among the core psychological elements retained by the Nazi in the imaginative leap would be his beliefs about the nature of Jews, perhaps projecting himself into what he took to be a "clearheaded" version of the specific Jewish person in question.

I share your worries about the relation of G3 to the Golden Rule, though. I've been having similar thoughts about the relation of the contractualist formula Parfit winds up with to Kant (or Kantianism).

Isn't the problem in the Nazi Case that he's not doing what he thinks that he’s doing because he has a false belief? What he intends to be doing is killing beings that are evil and subhuman, but what he is in fact doing is killing beings that are neither evil nor subhuman. So, as Josh would say, it all depends on what the relevant description of the act is. Even if we could rationally choose that we ourselves be killed if we were evil and subhuman, G3 doesn't imply that what the Nazi did wasn't wrong.

At the end of this chapter, Parfit notes that "some people have come to believe that Kant's Formula of Universal Law cannot help us to decide which acts are wrong, or help us to explain why these acts are wrong." As O'Neill puts it, FUL "gives either unacceptable guidance or no guidance at all." Parfit responds that once we revise FUL in the ways that he recommends, it doesn't give unacceptable guidance. But what about the other horn of O'Neill's dilemma? Does it give us any practical guidance at all? Does it help us decide which acts are wrong? Can anyone tell me whether, for instance, it is wrong to kill one to save five on the Kantian Contractualist Formula?

Doug: What the Nazi would rationally choose can depend on false beliefs, can't it? So he could rationally choose to kill beings he falsely believes are evil and subhuman.

As to your second question, I believe the point of the next chapter (11) is to deal with those sorts of practical guidance questions.

Dave: You write: "he could rationally choose to kill beings he falsely believes are evil and subhuman." Heck, he could rationally choose to kill beings that he knows are neither evil nor subhuman so long as it was in his self-interest to do so. But I thought the issue was whether he could rationally choose that others kill him when they falsely believe that he is evil and subhuman? I don't think that he can rationally choose that given the prevalence of false beliefs about such things.

Dave: As to the second question, there you go cheating again by reading ahead. That spoils all the fun.

Doug: Your way of framing the matter utterly undermines the very possibility of using G3 as a decision procedure or as a moral motivator, for most (if not all) of my false beliefs are beliefs I don't know are false. So how could the issue be "whether he could rationally choose that others kill him when they falsely believe that he is evil and subhuman?" That might work as a criterion of rightness, but it sure as hell won't work as a moral motivator or a decision procedure.

Instead, the Nazi can, it seems, treat Jews only in ways in which he would rationally choose that he himself would be treated, were he a Jew with the beliefs he has about the evil of Jews, regardless of whether or not those beliefs are true.

And back to Jussi: Parfit claims that what it means to imagine being relevantly like the people whom our acts affect is to imagine ourselves having their "desires, attitudes, and other physical or psychological features" (183). Beyond my earlier worries about identity, there's no explicit specification here that I would have to imagine myself with the other's nonmoral beliefs.

Dave,

Actually, I should have said, as I did above, that the issue is "whether he could rationally choose that others kill him if he were (as he mistakenly thinks the Jews are) evil and subhuman." And, as I said above, even if he could, that doesn't mean that G3 implies that what he did was permissible, for G3 would then only imply that it is permissible to kill those who are evil and subhuman, and the Jews he killed are neither evil nor subhuman.

Now you express the following worry:

Your way of framing the matter utterly undermines the very possibility of using G3 as a decision procedure or as a moral motivator, for most (if not all) of my false beliefs are beliefs I don't know are false.

I don't follow. Why isn't G3, as I understand it, a good decision procedure? It tells us (or at least those of us who don't have the same false beliefs that the Nazi has) that it is wrong to kill the Jews. And it motivates me, because it gets me to see things from the other's position. For instance, I might say, "Boy, I wouldn't want to be killed if I weren't evil and subhuman." Of course, G3 doesn't help those who have the relevant false beliefs to accurately determine what's right and wrong. But what principle does? (Only false ones, I imagine.) And sometimes G3 will even motivate people with false beliefs to act wrongly. But isn't that true of all moral principles?

OK, I think I cede the point about motivation and decision procedures given false beliefs. There might be a few principles that escape the worries about false beliefs, but you're right that they'd be true of at least most.

But now I'm wondering whether or not your appeal to false beliefs is going to work here. What he intends to be doing (killing those who are evil and subhuman) may not be what he's in fact doing, but isn't there a distinction between what you intend to do and what you're intentionally doing? Or have I got the wrong distinction?

I have a follow-up question to one of Doug's questions. If the supreme principle of morality is supposed to "explain why these acts are wrong," then KCF (that looks distressingly like a fast food joint) should do the work of a criterion of rightness. But this kind of formula strikes me as getting further from the moral criterion (allowing that formulas of this kind might be better decision procedures). Is what makes A not punching B right that it comports with "the principles whose universal acceptance everyone could rationally will"? That seems far-fetched to me, as does any answer that appeals to counterfactual agreements. On these sorts of questions, traditional consequentialism (the punch reduces welfare, etc.) or Kantianism (it disrespects humanity) seem to me to have much more intuitive answers. Or is there some way of construing KCF such that it seems more intuitive as an explanation for why wrong acts are wrong, particularly those where contracts are irrelevant (e.g., ordinary violence)?

Dave,

You ask, "What he intends to be doing (killing those who are evil and subhuman) may not be what he's in fact doing, but isn't there a distinction between what you intend to do and what you're intentionally doing?"

Yes, there is this distinction. As I understand it, the distinction is such that what you intentionally do is broader than what you intend to do. In the Trolley Case, you may not intend, but only foresee, the killing of the one on the side track, but, nevertheless, killing the one on the side track is one of your 'intentional doings'. In any case, I don't see the relevance here since the killing of the Jews was both intended and an intentional doing of the Nazi.

Note that I'm not appealing to false beliefs in formulating G3. The false belief is only employed to explain why the Nazi incorrectly believes that G3 implies that it is permissible to kill the Jews. In fact, G3 implies that it is impermissible to do so regardless of what one's (true or false) beliefs are, for we could not rationally choose for others to kill us when we are neither evil nor subhuman -- at least, not in the sort of circumstances in the case at hand.

Josh,

Good point. It seems that if an explanation of why certain acts are wrong is supposed to tell us what makes these acts wrong, then KCF fails here as well. This same objection, if I recall correctly, came up when we read Scanlon's book. And I believe you wrote an excellent post extending this sort of objection to all moralities by authority. See Josh's Are Deontology, Consequentialism, and Pluralism the only viable theories of ethics?.

Josh (and Doug): One possible reply is that KCF doesn't have to be a criterion of wrongness; rather, it's merely a characterization of wrongness.

Second, even if it is a criterion of wrongness, it itself wouldn't have to be what explains first-order acts of wrongness directly; instead, those would be explained with respect to some specific principle(s).

Regarding your first point, whether KCF does or doesn't need to be a criterion of rightness depends on what Parfit wants from KCF. If he wants it to explain to us why, at the most fundamental level, certain acts are right and others wrong, then it does need to be a criterion of rightness, right?

Regarding your second point, that doesn't sound like a criterion of rightness to me. A criterion of rightness, I thought, was supposed to tell us what the most fundamental right-making and wrong-making features of acts are. On your conception here, KCF doesn't tell us that. Rather, it is the specific principle(s) that tell us that.

Went to preview my comment, and whadya know, Doug wrote almost the exact same thing! Just one follow-up point. Even if Parfit doesn't want fundamental principles to provide a criterion of wrongness, it seems reasonable for us to want it. (Or for us to want Parfit to say that he's only searching for half of the supreme principle of morality. Or for him to explain what other sense of "explains" he has in mind.)

Josh (and Doug),
I agree with Dave. KCF can be seen as characterizing the nature of wrongness, which is different from specifying a wrong-making property or principle (Scanlon makes this distinction in What We Owe to Each Other, pp. 10-11 and p. 391n.21). True, in a way the formula of humanity seems more direct. But this may be deceiving, as it is not obvious what it means to respect the humanity of someone else (KCF may help us to identify what this means in different contexts, by asking what other human beings could rationally accept).
I want to raise another question. I wonder whether the Non-Reversibility Objection applies to Kant's FLN test only if we accept a construal of the maxims mentioned that is too specific. If we construe the maxims Parfit mentions in more general terms, or as implying more general policies (say, e.g., that you can oppress members of other, weaker groups when that would benefit you in some immediate way), then it is not obvious that the wrong acts are not reversible. Your group is dominant now, but it may turn out to be dominated later on, and you might not rationally will that you then be treated on the basis of a policy that permits oppression of weaker groups...
Is this worry a fair one?

I thought the worry was that there's something too indirect in KCF to explain why it's wrong for A to punch B. But what I was trying to say in my second possible rejoinder was that the specific principles can address that ("it's wrong to cause physical harm to other people for no good reason"), and then if you want to know the more fundamental reason for that principle, you can appeal to KCF. So KCF can explain what the most fundamental right-making and wrong-making features of actions are, but when it comes to the more humdrum actions we perform, the more direct, or immediate, explanation will be one (or more) of the specific principles whose universal acceptance everyone could rationally will.

Pablo: I agree with Dave too insofar as he claims that "KCF can be seen as characterizing the nature of wrongness, which is different from specifying a wrong-making property or principle." The question, for me, is whether that's enough.

Dave: If KCF is taken to be explaining what the most fundamental right-making and wrong-making features of actions are, then I think that it gets the wrong answer. What makes A gratuitously punching B wrong is not that it fails to comport with "the principles whose universal acceptance everyone could rationally will," but that it gratuitously causes harm. After all, it could turn out that, on the correct substantive account of rationality, A's gratuitously punching B does comport with the principles whose universal acceptance everyone could rationally will. It could not, however, turn out that gratuitously causing harm is permissible.

Dave: The argument I just gave is probably not the best. Even if A's gratuitously punching B does necessarily violate the principles whose universal acceptance everyone could rationally will, it still seems, to me anyway, to be the wrong explanation as to what fundamentally makes A's doing so wrong.

Doug: Oooh, I had just written up a reply to that argument when I saw your new comment. Now you've got the old "seems to me to be the wrong explanation" argument, so I'll respond in kind: it seems to me to be the right explanation. The principle of rightness at issue should be (it seems to me, anyway) something for which violations render one morally responsible (absent certain conditions like accidents, ignorance, insanity and the like), in the sense of being accountable. But being accountable is being accountable to others, and that surely depends on being susceptible to moral demands whose source is in the claims of those others. So again, I can agree with you that causing gratuitous harm is wrong (at least most of the time), but that will be true only insofar as the victim (or some other member of the moral community) has a legitimate claim against being gratuitously harmed.

As for the issue of whether or not a characterization of wrongness is enough here, I don't know. I'm inclined to think it's not, and Parfit's remarks throughout suggest that that's not what he's gunning for. But as the original objections seemed to have arisen in response to the move to contractualism generally, that's a legitimate response, I think, one that Scanlon himself makes (as Pablo rightly points out).

Dave,

Even within your accountability approach to wrongness, and even under a claim-based approach, I still want to say that KCF is a counter-intuitive moral criterion. It's not because others can justifiably make claims on me not to punch them that punching them is wrong; rather, it's because of the content of those justifications, namely that punching them gratuitously harms them (or some such). So (by my intuitions, anyway), someone can say "harming me was wrong" not because he has a claim against me harming him; rather, he has a claim because harming him is wrong for independent reasons. Maybe it's wrong because it reduces his welfare, or causes suffering, or treats his humanity as a mere means. But those seem like our justifications for the claims, rather than being justified by the claims. Or look at it in your terms: the claimant must have a "legitimate" claim. Whatever makes it legitimate is the criterion of rightness, the basis for the act being wrong that is more fundamental than, and explains why, a pro-harm principle is not among "the principles whose universal acceptance everyone could rationally will."

On an interpretive sideline, there are a couple of places where Scanlon sounds like he agrees, by basing rightness on the more fundamental value of humanity (e.g., p. 268). Though he also says many things that seem to run against this (e.g., the intro, among many other places).

To Pablo: I think your other worry about maxim description is indeed fair (I've gone on about similar worries so much over the last couple weeks that I should probably stop now!).


David,

you wrote:

'And back to Jussi: Parfit claims that what it means to imagine being relevantly like the people whom our acts affect is to imagine ourselves having their "desires, attitudes, and other physical or psychological features" (183). Beyond my earlier worries about identity, there's no explicit specification here that I would have to imagine myself with the other's nonmoral beliefs.'

I don't think the Nazi world-view can consist of purely non-moral beliefs. Repulsive ideas such as that Jews are inferior and evil sound more like evaluative and moral beliefs. Thus, they would be on the side of attitudes (which beliefs too are) and other psychological features.

Josh, you wrote:

So (by my intuitions, anyway), someone can say "harming me was wrong" not because he has a claim against me harming him; rather, he has a claim because harming him is wrong for independent reasons. Maybe it's wrong because it reduces his welfare, or causes suffering, or treats his humanity as a mere means. But those seem like our justifications for the claims, rather than being justified by the claims. Or look at it in your terms: the claimant must have a "legitimate" claim. Whatever makes it legitimate is the criterion of rightness, the basis for the act being wrong that is more fundamental than, and explains why, a pro-harm principle is not among "the principles whose universal acceptance everyone could rationally will."

What's interesting about what you say is that each of the justifications you give -- reducing welfare, causing suffering, treating humanity as a mere means -- sounds legitimate to me, despite appealing to very different criteria of wrongness. So what might possibly unite them all into a single fundamental criterion of rightness? Simple: their universal acceptance is something everyone could rationally will.

(BTW, "'legitimate' claims" was just shorthand for "claims referencing principles whose universal acceptance everyone could rationally will.")

Jussi: It depends. Is the belief that some entity is non-human, or even sub-human, necessarily a moral belief? I've wondered explicitly about this issue when considering Susan Wolf's article "Sanity and the Metaphysics of Responsibility," in which she argues that Nazis were partially normatively insane, and so partially non-responsible for their actions. Normative insanity is defined, roughly, as the inability to recognize moral reality, whereas cognitive insanity is defined as the inability to recognize nonmoral reality. But it's unclear whether or not the inability to recognize that there are no relevant psychological differences, say, between oneself-qua-Nazi and Jews is an example of normative or cognitive insanity, given those construals.

There are two ways to go here, I think. First, 'sub-human' could be a thick term which, if it applied, would imply negative evaluations of the object and normative conclusions about what to do to things in that category. In this sense, it is of course an empty term. But having beliefs that something would satisfy the term would count as a moral belief because of the evaluative/normative content of the belief. Second, it could be a purely descriptive term, in which case no evaluative or normative implications would exist for what to do to beings in that class purely in virtue of that belief. In this case, the Nazi would need some (false) evaluative/normative beliefs about what ought to be done to subhumans if there were such in the purely descriptive sense. Either way, the Jewish person would lack the normative/evaluative beliefs.

Dave: You write: "What's interesting about what you say is that each of the justifications you give -- reducing welfare, causing suffering, treating humanity as a mere means -- sounds legitimate to me, despite appealing to very different criteria of wrongness."

What are your grounds for asserting that these three appeal to "different criteria of wrongness"? Perhaps, they each amount to the same thing. It seems to me that the three all appeal to the very same criterion: namely, they all entail producing disvalue without producing any compensating value. Reducing welfare is bad, causing suffering is bad, and treating humanity as a mere means is also bad.

My own inclination would've been to say that those things are bad only insofar as they violate principles to which everyone would reasonably consent. What makes reducing my welfare (without compensating value) bad? It's something to which the reasonable me would never consent. What makes reducing my welfare with a certain level of compensatory value permissible? It's something to which the reasonable me might consent. (I'm borrowing a Scanlon-esque formulation here, and I'm putting things roughly, but I'm sure you get the idea.)

Jussi: Two things. First, I don't understand what you mean when you say, "having beliefs that something would satisfy the term [on the thick understanding of "subhuman"] would count as a moral belief because of the evaluative/normative content of the belief." Why is my belief that X meets the condition of some evaluative category itself an evaluative belief?

Second, if the term is purely descriptive, why think the Jew wouldn't share the Nazi's evaluative belief about what ought to be done to subhumans generally? He'd just think he himself is not one.

Dave, you wrote

What's interesting about what you say is that each of the justifications you give -- reducing welfare, causing suffering, treating humanity as a mere means -- sounds legitimate to me, despite appealing to very different criteria of wrongness. So what might possibly unite them all into a single fundamental criterion of rightness? Simple: their universal acceptance is something everyone could rationally will.

I find Doug's answer here kind of intriguing, but I had something different in mind. I only mean to say that those are each plausible candidates for the fundamental criterion, whereas the contractualist principle is not. I (surprise, surprise) believe that some version of the formula of humanity supplies the moral criterion, in which case principles that prohibit reducing welfare and causing suffering are merely subsidiary to that fundamental principle.

But that's not really the issue at hand, which is whether the contractualist principle is also a plausible candidate. If it's supposed to be fundamental, then there's no reason why we might or might not consent to any given practice or sub-principle. Contractualists like to emphasize that the consent must be reasonable or rational as a way of avoiding that problem, but to me that just pushes back the problem. Either it's reasonable (or rational) by virtue of some more basic principle (e.g., I can refuse consent because it wrongfully causes gratuitous harm) or not. If the former, (unadulterated) contractualism is false; if the latter, it's too permissive.

As you put it in your latest reply to Doug,

My own inclination would've been to say that those things are bad only insofar as they violate principles to which everyone would reasonably consent. What makes reducing my welfare (without compensating value) bad? It's something to which the reasonable me would never consent.

At the risk of sounding like a broken record, this is what seems to me to be the counterintuitive claim: rather than saying that reducing my welfare is bad because I can't reasonably consent to it, my intuition is to say that I can't reasonably consent to it because it's bad. If that's not what makes my withholding consent reasonable, then what does? And, for whatever answer is given, why isn't the content of that answer itself the criterion?


David,

First, I thought evaluative beliefs just were beliefs the content of which is that the object satisfies some evaluative criteria. Thus the belief that 'X is a subhuman', on the thick understanding, would be just the thought that X satisfies such-and-such evaluative criteria.

Second, well, have many people, other than Nazis, thought that anything that is not quite a human on some descriptive criteria should be tortured, poisoned, and burned? I think almost everyone shares attitudes of more humane treatment for not-quite humans, if there were such. Nazis were the ultimate speciesists with a very deranged idea of the species.

The comments to this entry are closed.
