
September 04, 2007



Hi, interesting post. I have myself wondered about utilitarians and zombies... but I wonder at your intuition that there are no moral qualities in the zombie world. Zombies can promise, can't they? I mean, they have all the intentional states (minus qualitative character). If a zombie breaks a promise, then they have done something immoral, no?

That's good. I do have the intuition about the Zombie world. Maybe it is just me. About promises: I would say that, yes, they can make promises, but I'm not sure this implies that when they break the promises they have made to one another they act wrongly.

The wrongness of breaking promises does not only depend on the agents but also on the promisees. I can, for instance, make promises to trees or my computer. My intuition is that I don't act wrongly when I break them. Similarly, if I made a promise to a Zombie, I'm not sure breaking it would be wrong. It's not that the Zombie would mind, nor even that it could mind in principle. Of course, the Zombie would go through the motions and act as if it minded. But there would be nothing that it would be like for it to have the promise broken. If this goes for the promises I would make to the Zombies, then it seems to also go for the Zombie-Zombie cases. I have to say that I'm not 100% certain about these intuitions.

I am very uncertain about the sources of these intuitions. One thing that comes to mind is that the Zombies lack self-consciousness. We probably can attribute to them higher-order functional states - beliefs about their own states. But it would be odd to think about the Zombies making choices or decisions based upon reflection. Reflection seems to require a stronger sort of self-consciousness. And I'm starting to wonder if that sort of reflection is a necessary requirement for acting wrongly or rightly. Even acting in ways that have thick qualities seems to require the possibility of being conscious of certain considerations being salient for one's actions.

I have some similar intuitions, though I would not put a lot of weight on them. Still, I think part of what drives them is that I'm not sure the zombies are _acting_ at all. I want to say, their motions are all for causes, but not for reasons. However, I'm also aware that my intuitive metaphysics here is not necessarily compelling. Still, there you have it.

I share most of your intuitions about the zombie moral world, Jussi. The Kantian in me takes from this that morality is principally about how agents relate to one another (the reasons they act upon and so forth) rather than the realization of natural states of affairs. As Heath suggests, zombies 'act' (in the sense that their bodies move in ways that are phenomenally indistinguishable from how we actual moral agents act) but are not thereby moral agents.

Thanks for the response Jussi,

Do you really think that you can successfully make a promise to a tree? If you take a Searlian kind of speech act theory seriously, it turns out that you can't.

So if zombies can really make promises, then they really have obligations, and when those are broken the act is wrong... or so it seems. You would then want to rule out that the zombies were really making promises, in the way that (I think) Heath and Michael responded.

One way to do that might be to argue that the zombies couldn't really use the categorical imperative, since it involves consciously 'seeing' or 'realizing' that a certain maxim can't be universalized... they wouldn't then count as moral agents because they cannot have their will determined by the categorical imperative.

Of course, one problem with this is that some people think that most of the decisions we make are actually made unconsciously and that we confabulate a story that has us acting for reasons. If so, we might be zombies!


Here's a worry. If you use 'moral properties' broadly, to include values as well as properties attributed in act appraisals like 'being wrong', 'being permissible', and 'being right', then Zombie worlds do have moral properties. They would have moral properties in virtue of having value. For instance, Zombie worlds would have beautiful sunsets, life, and culture. On the other hand, if you restrict 'moral properties' to include only properties attributed in act appraisals, then the thesis loses a bit of interest, since it's plausible that acts are right or wrong only when they are performed for certain reasons. If they are performed for reasons, then it's plausible that they are consciously performed, and this would, not surprisingly, be excluded in a Zombie world.

One may opt for a reductionist picture of value, a picture according to which the value of beauty, life, and culture consists in one's having certain desires (satisfied) towards sunsets, life, and culture. So then the idea would be that since Zombies do not have desires, there is no value in a Zombie world. This latter view, however, has its own problems. Desires do not need to be conscious, as we attribute desires to things that we're not inclined to attribute consciousness to. Moreover, this reductionist picture sits uneasily with the anti-reductionist view that motivates a Zombie world. In the form of a question: why should we be anti-reductionists about consciousness but not about value?

As for myself, the two anti-reductionist views each seem just as conceivable as one another (not conceivable at all), but the alleged asymmetry would need to be defended, or else the possibility of irreducible value will make Zombie worlds rich in moral properties.

Christian: I think those who suppose that desires "do not need to be conscious" don't thereby think they're necessarily not conscious. I.e., it's a condition of someone's having a desire that it must be possible to bring that desire into consciousness. Given the descriptions of zombies, the only 'desires' they could have would necessarily not be conscious, hence not desires.

Thanks everyone for the interesting comments. I'm not too sure about all this, but here are a few thoughts.

Heath and Michael,

I'm thinking about the Zombie-Jussi in the Zombie world currently typing these very words (it would be cool to swap places...). It's easy to describe him as wanting to make a point and believing that by writing these very words he can do so. I think I may be with the Humeans that such desire-belief pairs are necessary and sufficient for the Zombie-Jussi's motions to count as actions.

Of course, one can do two things here. First, one could say that the desires and beliefs one is inclined to attribute to him are not *real* desires and beliefs - those would have to be conscious. I think Galen Strawson takes this line. Thus, you would not get actions. Or, you might deny that the Humean theory of action is right.

I like Michael's proposal that Zombies would not count as agents in a rich enough sense for moral appraisal. You could accept that Zombies do act but they are still not agents. Robots seem to fall into this category.


I think I might want to distinguish between three things:
1. Promising something to x
2. Making a successful promise to x
3. Making a binding promise to x.
I would like to say that if x is a tree, you can only get 1 but not 2 or 3. I wonder if zombies too would be capable of 1 but not 2 and 3. Or, they could be able to do 1 and 2 but not 3. This would be the class into which coerced promises could fall. You can successfully promise something to a torturer, but I don't think there is any obligation created.

I'm not sure I'd like to go the Kantian way. This is actually quite interesting. Just for the reason you mention, I've always been attracted to the idea that the criterion for a good will in Kantian views should be understood as a formal criterion for a form of the will. And one's will can have the right form even when one doesn't acknowledge it. If the Zombies have a rich enough set of functional states like beliefs and desires, then it could be that they are capable of having the right form of the will. But from this you might get an argument that Kant's criterion for the moral worth of actions is not sufficient, because otherwise the Zombies' actions would be good.


That is a worry. I think I would like to include some values among moral properties and not others. The Zombie world would have aesthetic values like that of the beautiful sunset, or biological values like life. I'm not sure what value culture has - probably many different values. But I might want to deny that moral values are instantiated in that world. I'm also not sure about the other horn of the dilemma. It seems to me that accidental actions not done for any reason can be right and wrong in our world.

I do want to grant that Zombies have desires. On the reductionist picture of value you propose, the sunset would lose its value if the Zombies were removed from the world (as it would if we were removed from our world). I'm not sure I would want to say this. I'm also sorry to say that I'm not sure I follow the last argument.

For all I know, the reductivist projects in both areas of philosophy often share the same motivations. But I take it that whether one should be a reductivist about some discourse depends on the plausibility of the reductive analyses one can come up with. Maybe one could find such analyses for value concepts but not for phenomenal concepts. This could be due to asymmetric interrelations, such as needing phenomenal concepts to analyse value concepts.

Whether or not the asymmetry is defensible, I don't think that the mere possibility of irreducible value properties can make the Zombie world rich in moral qualities. One would also need it to be the case that things in the Zombie world actually have the sui generis moral qualities.


So you're suggesting that Zombies couldn't have desires? I put that as a question since I'm not sure. But let's suppose that Zombies couldn't have desires. I'm suggesting that a Zombie world would still have value, since I don't think the existence of desire is required for the existence of value. More precisely, I'm suggesting that the claim that a Zombie world must lack moral properties is true only if, when 'moral properties' are understood broadly, a reductionist view of value is assumed. But then the question is this: why should we be reductionists about value while, at the same time, anti-reductionists about consciousness?

I'm also unsure why it should be that the possibility of bringing a state S into consciousness is necessary for S to be a desire. But suppose that's right. If so, it's then arguable that Zombies "would have desires", since our duplicates are only contingently Zombies, not necessarily Zombies; otherwise Zombies would only be 'Zombies' in name.


"It seems to me that accidental actions not done for any reasons can be right and wrong in our world."

A couple of things. I don't think we have intuitions about the way things actually are. Intuitions must be about possibilities and necessities (though, of course, correct intuitions about necessities entail facts about the actual world). More importantly, I'd like to hear more about this claim. It seems to me that an action performed for no reason at all can be neither right nor wrong. A challenge: find a better explanation for why animals cannot do right or wrong than the explanation that they do not act from reasons.

"But, I take it that whether one should be a reductivist about some discourse depends on the plausibility of the reductive analyses one can come up with."

Yes. My point is that the conceivability intuitions seem to me to be symmetric between the cases. A Zombie world is no easier to conceive of than a duplicate of our world without value. So, if we are to be reductionists about one and not the other, we need a reason to think there is such a reductive analysis in the offing. What might that be?

"I don't think that the mere possibility of irreducible value properties can make the Zombie world rich in moral qualities. One would also have it that things in the Zombie world have the sui generis moral qualities."

The values needn't be sui generis. The idea is that we can subtract consciousness from our world by considering a duplicate of it, i.e. a Zombie world. Next, whatever values exist in our world that do not depend upon consciousness will carry over to the Zombie world. Since our world is rich in value that doesn't depend upon consciousness, so is the Zombie world. You may restrict value to moral value, though I do not think there is a difference. But then my first worry arises again. If we restrict 'moral properties' to exclude interesting properties, then the conclusion of your argument is not at all surprising.


Good. This is helpful. A couple of thoughts on your couple of thoughts.

1. I'm surprised that you think that we don't have intuitions about how things actually are, morally speaking. Most people have a lot of intuitions about whether attacking Iraq, for instance, was right.

When I'm driving, if I'm not careful, I might miss spotting a pedestrian crossing the street and run her over. That is an action of mine that I don't do for any reason. Still wrong, right? A reply to the challenge is that animals lack what Wallace would call general normative capacities for responding to reasons. They could not even act for a reason. Agents can. Whether they do or do not in particular cases does not seem to be necessary.

Of course, it can be that acting for a reason is necessary for responsibility, blameworthiness, and praiseworthiness for right and wrong actions. So Arpaly, for instance, thinks that praiseworthiness requires that the act was motivated by concerns that track real good reasons. But you can have wrong actions for which you are not blameworthy.

2. It might be that this is true for you: "A Zombie world is no easier to conceive of than a duplicate world of ours without value". I seem to be able to conceive of a Zombie world but not of a world that lacks value but has all the same physical and phenomenal properties. I wonder how widespread these intuitions are. Some of the others above seemed to be on my side.

3. Well, many people think that philosophy should not lead to surprising conclusions but rather leave things as they are. I like the idea of thinking about the thought experiment as extracting consciousness from our world and seeing what evaluative and normative properties are left. I find it interesting if any evaluative or normative properties disappear, as they seem to in the case of some moral properties. That would seem to imply that naturalist, in the sense of physicalist, semantic views about some of the central moral and evaluative concepts could not hold. I would be surprised if the argument (which I'm not sure I would want to defend) has even this conclusion. But I'm easily surprised.

Hey Jussi,

"Most people have a lot of intuitions about whether attacking Iraq for instance was right."

I'd distinguish between derived claims and basic claims and restrict intuitions only to basic claims. Derived claims depend upon basic intuitions as well as premises about the empirical facts, as it would be with the claim that "attacking Iraq was right". There are subtleties, but that's the idea.

"When I'm driving if I'm not careful I might miss spotting the pedestrian crossing the street and run her over. That is an action of mine that I don't do for any reason."

Maybe I'm missing something, but I think you have a reason to be careful.

I don't know what Wallace's or Arpaly's views are, but they seem to be getting at what I am. Animals don't act wrongly because they don't act from, respond to, or possess reasons.

"I seem to be able to conceive a Zombie world but not a world that lacks value but has all the same physical and phenomenal properties."

I can't conceive of either. :)


You wrote that:

"It seems to me that an action performed from no reason at all can neither be right or wrong."

The case was supposed to be a counter-example to this. It doesn't really matter that there is a reason to be careful, because that isn't a reason you acted on.

In the Iraq case, or other actual cases, do you think that people really go through an inference to end up with the moral intuition? I would have thought that people can be directly and non-inferentially sensitive to such events and their moral features.

I know many say that they cannot conceive of Zombies. Occasionally I feel that way too. That's one reason to be suspicious of the sort of arguments we are discussing.


I see. Now I understand the example. Someone is driving, they kill somebody through negligence, and you want to describe the person's action as wrong though done for no reason.

I want to say that the action is done for a reason, stipulation aside. People don't drive for no reason. But if you really want to make the driving involuntary, like a cough, then I say the act isn't wrong.

There is an interesting question whether there could be an involuntary act, an act done for no reason, that should have been done for some reason. I need to think about this.

I do think there are inferences involved, though they need not be conscious. The literature on heuristics and biases talks a lot about this. Here is another worry, though: intuition gives us knowledge, and when it does, it's a priori. But if intuitions were really about the actual world, then we would have a way to have a priori knowledge about which world is actual. And that's impossible.

I used to date a Zombie.


I worry that you are switching actions. Sure, the action of driving the car is done for a reason. The trouble is that this seems to be a different action from killing the pedestrian. That the one action is done for a reason doesn't seem to make the other action one done for a reason. Doing something voluntarily also does not necessarily require doing it for a reason. I often just whistle for no reason at all. It's still voluntary.

I share your second worry. The problem is that since Kripke, Kaplan, and others, people do take contingent a priori knowledge seriously. Dancy thinks moral knowledge falls into this category in his new book. The main argument for this possibility is to refer to partners in crime. Ridge and McKeever have good commentary on that in their book.


I think the contingent a priori examples are dubious, but fair enough, it's not obvious. That was just a side argument, and there are all sorts of reasons to think intuitions can't be about what is actually the case. But I suppose this is something of a digression.

You say you whistle for no reason at all, while I think you whistle for reasons you are unaware of. I don't think the relevant action is killing the pedestrian; rather, that is a consequence of your action. Moreover, I think 'driving the car' counts as a performed action only derivatively, as an apt description of more fine-grained actions that explain it. But, again, it's not obvious how to individuate actions, and I don't want my point, namely that all actions evaluable for rightness or wrongness must be performed from some reason, to hang on this.

I'm open, though. I just can't think of an action I would want to describe as wrong that wasn't also performed for a bad reason.
