
July 14, 2006

Comments


That sounds about right from my experiences with intro students. Were the subjects who were given the 'morally good' case the same subjects who were given the 'morally bad' case? If so, what was the order: the good case first or the bad case first? Or were there two sets of subjects, where one set was given the 'morally bad' case and the other set was given the 'morally good' case? If the latter, then it might be interesting to see what happens when you give the same set of subjects both the morally good case and the morally bad case -- that is, see whether they sense any inconsistency in calling the morally good case an instance of letting die and the morally bad case an instance of killing. If your working hypothesis is correct, then the subjects shouldn't have any feeling of inconsistency in calling the morally good case an instance of letting die and the morally bad case an instance of killing, right? Or do I misunderstand your working hypothesis? My experience with students is that they do sense an inconsistency when presented with both cases one after the other.

Josh,
That fits my intuitions about what's going on. Except this: did you separate out causal judgments in general from judgments about human beings and their actions as causes? It might be that moral judgments only figure in the latter.

Whether or not it is a plausible view is underdetermined by the kinds of questionnaires you again seem to be using. I guess this too goes back to Antti's paper, but also to a lot of Grice's work.

We can have two explanations for why the subjects are more willing to say in the bad case that the patient was killed and death was caused, and less willing in the good case. The first is that the truth-conditions of terms like 'causation' and 'killing' are morally loaded, as you assume. The second is that while the truth-conditions of these terms are morally neutral - it is in both cases equally true that the person is killed and death is caused - the terms carry implicatures about blame. If that were true, the implicature would explain why the subjects are not willing to apply the term even though it would be true even by the criteria they use: they do not want to imply in the good case that someone deserves blame. As far as these two explanations go, the data is neutral. In order to show that the asymmetry is to be explained by the semantic meaning of these terms and not by pragmatics, more has to be done.

To rule out a pretty natural nonmoral explanation, one presumably needs to control for a nonmoral asymmetry: in the "morally good" case, the doctor's action is caused by the patient's wishes, but in the "morally bad" case it is not.

Thanks so much for all of these comments! The ideas you've come up with so far have been incredibly helpful, and we'd love to hear any further thoughts you may have.

Doug,

In our study, each subject only got one of the two questions, but it would be extremely interesting to see whether subjects would see any kind of inconsistency if they received the two questions back-to-back.

So far, I don't think that anyone has tried that sort of approach with doing/allowing questions. But Shaun Nichols and Joe Ulatowski did try something along those lines with a question about intentional action -- giving people two cases back-to-back that were extremely similar in almost all respects but differed in that one was morally good and the other morally bad. To everyone's surprise, subjects happily said that the morally good behavior was unintentional and the morally bad one was intentional, showing no sense that there was anything inconsistent about these two answers.

Robert,

I think it would be very interesting to see whether normative beliefs also affected people's intuitions about causation by inanimate objects. For example, suppose that a computer produces some strange sort of output, and we are looking for a causal explanation. Will we be inclined to say that the causal factor is that aspect of the computer which is, in some normative sense, 'incorrect'?

Jussi,

I would love to know whether this effect is due to something about people's fundamental concepts or whether it is due to conversational pragmatics. Do you have any suggestions about how we might figure this out?

Dave,

Yes, this is definitely a problem, and Walter and I are now working on a new version in which the agent's behavior is in no way affected by the patient's wishes.

In the meantime, though, I wonder if your worry can be addressed simply by conducting further statistical analyses. In our study, all subjects were asked whether the doctor's action was morally wrong, and multiple regression shows that people's moral judgments mediated the effect of condition on causal judgment. (For example, those subjects who thought that the agent in our 'morally good' vignette was actually doing something morally wrong tended to say that this agent did cause the death.) Does this further analysis help to allay any of your concerns?
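For readers curious what this kind of mediation analysis looks like in practice, here is a minimal sketch on purely synthetic data (every number and variable name below is hypothetical, not the actual study data): a Baron-Kenny-style comparison regresses the causal rating on condition alone, then on condition plus the moral-wrongness rating, and checks whether the condition effect shrinks once the mediator is included.

```python
# Sketch of a Baron-Kenny-style mediation check on synthetic data.
# All data are simulated for illustration only; the real study's
# numbers are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# condition: 0 = 'morally good' vignette, 1 = 'morally bad' vignette
condition = rng.integers(0, 2, n).astype(float)
# moral: hypothetical moral-wrongness rating, driven by condition
moral = 2.0 * condition + rng.normal(0.0, 0.5, n)
# causal: hypothetical causal-judgment rating, driven only by moral
causal = 1.5 * moral + rng.normal(0.0, 0.5, n)

def ols_slopes(y, *predictors):
    """Least-squares fit with an intercept; returns the slope estimates."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

total_effect = ols_slopes(causal, condition)[0]          # condition alone
direct_effect = ols_slopes(causal, condition, moral)[0]  # condition + mediator

print(f"total effect of condition:   {total_effect:.2f}")
print(f"direct effect (w/ mediator): {direct_effect:.2f}")
```

If the moral judgment fully mediates the effect, the direct effect should fall to near zero while the total effect stays large -- the pattern described in the comment above.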

Joshua,

that of course is a tricky question. I guess I would start from the current philosophical debates concerning doing and allowing, killing and letting die, and causation. All the philosophers working in this area whom I have met have seemed like competent users of these terms, and even though they disagree about the truth-conditions of these notions, I have nothing but trust in them as capable philosophers who are on their way to more and more accurate analyses of the relevant concepts. Many of them debate whether there is a moral difference between killing and letting die, but as far as I know no one has suggested that these terms themselves have morally loaded truth-conditions (which would make the moral difference they debate trivial). This makes me willing to go for the pragmatic account of the asymmetry in the application of the terms. I really wouldn't want to say, on light grounds, that they have all made a complete mess of their analyses. The worry of course is that these philosophers are no longer talking about the folk concepts. But I'm not really convinced of this. I've seen some of these philosophers having heated discussions with the 'folk' in which both sides seem to understand themselves to be involved in substantial disagreements that assume shared concepts. I'm also sure that to suggest to them that they have changed the subject would make them point to something like what Timmons called 'conceptual chauvinism'.

I would also reflect on my own intuitions about whether it is false in the good case that the patient was killed and true in the bad case, and why I might hesitate to say so in the former case. If my concepts of killing and letting die radically differed from those of others, I'm quite sure I would have noticed this already in ordinary everyday communication. I could also take part in Socratic questioning with the subjects: ask whether they realise the asymmetry in their use, whether they think there is something odd about it, and why they wouldn't say in the good case that the patient was killed. I could also ask whether they would say that the patient was killed if we acknowledged that the doctor does not deserve blame (conversational implicatures are cancellable in this way - conventional ones unfortunately are not).

Joshua,

Your experiment here is related to the one that I ran earlier. Here is my current hypothesis on some of the issues here: people have both what Dowe calls the genuinist intuition (that omissions are genuine cases of causation) and the intuition of difference (that omissions are not causes in the same way that positive acts/events are). But people's thoughts about causal responsibility are often influenced by their thinking about moral responsibility. Given that the two so often go together, the folk don't always differentiate the two in morally loaded cases. Particularly given the earlier work suggesting that people's attributions of moral responsibility are influenced by greater degrees of affect, I would think that the higher the level of affect (as in the death of a patient, particularly one who doesn't want to die), the more likely people are to run the two together. Your experiment may provide further evidence here. I'm planning on running a modified version of my previous experiment this fall to see if the causal responsibility/moral responsibility link varies across high vs. low affect cases.

Kevin,

This sounds like a promising suggestion, but I'd love to hear a little bit more about it. At least on the surface, it certainly looks like there aren't any omissions in the cases used in the experiment. In both cases, the doctor is not just omitting to perform a behavior but actively unplugging the machine. (And this makes it appear that the doing/allowing distinction isn't just a distinction between omissions and actual behaviors.) Are you suggesting that there might actually be some deeper sense in which there really are omissions involved in these cases?

Jussi,

You are certainly right to point out that people overwhelmingly believe that moral considerations play no role in the concept of causation, but it seems conceivable that people are simply mistaken about the nature of their own concept here. As we look at people's intuitions regarding particular cases and try to develop a theory that accurately explains them, we may end up coming to the conclusion that moral considerations actually do play a role in the concept and that people are simply confused about the nature of the distinctions that they themselves employ.

In this particular instance, there does seem to be at least some plausible case to be made for the claim that people are mistaken about their own concept. In particular, it seems that people's views might be distorted by an education in the sciences. That is, they might notice that moral considerations play no role in the sorts of causal explanations that appear in the sciences and then conclude that moral considerations don't play any role in the ordinary concept of causation. But there is good reason to suspect that the ordinary concept of causation diverges in certain ways from the concepts we find in the systematic sciences. After all, our aims in understanding the world are not purely scientific ones. We are moral animals through and through, and much of our concern with people's behavior is fundamentally wrapped up with moral issues.

I certainly agree that it is possible for people to be mistaken in trying to provide an analysis of a term which they have the know-how to apply. Probably most of us are. The question, though, is how big a mistake we want to attribute to people. There are reasons why attributing the biggest kind of error should be the last resort. The first sort of reason comes from the Davidsonian principle of charity. As long as we are trying to understand people, we need to assume that they are by and large rational and have mostly true beliefs. If accepting the traditional, non-moral intuitive analysis of causation (which those who have reflected on these matters a lot uniformly accept) and a pragmatic explanation for the asymmetry in application in this particular case allows us to regard others as not really making any mistakes, then we should accept that picture.

In addition, if we for some reason do not want to accept that view, we can still locate the mistake in one of two places. We can either say that everyone has made a massive mistake in trying to capture the norms that guide their use of 'causation', or we can say that some non-reflective people, even a majority, are misapplying the term in odd moral thought-experiments in the questionnaires. Again, if we want to be charitable to people, and I think we should, the latter option makes them more rational - guilty of a smaller and more local mistake than on the former option. For this reason, again, I would treat moral truth-conditions for causal claims as the last resort in explaining the data.

Very interesting. I have two thoughts about all this:

1. In my view no sensible defender of the doing/ allowing distinction thinks that every case is either clearly a doing or clearly an allowing; nor that every case that is clearly a doing or clearly an allowing has its moral character fully (or sometimes even at all) determined by that fact. Like most other important moral distinctions in applied ethics, the doing/ allowing distinction has a ceteris paribus status only.
My own standard example for this point is rather like Walter and Joshua's case in their questionnaire. It's the life support machine which can be designed two ways, one so that you have to press a button to switch it off, the other so that you have to press a button (say, every day or so?) to keep it running. Does it make a huge moral difference which way the machine is designed, just because not touching the button leads to death on one design, whereas touching it leads to death on the other? I don't really think so myself, and not because I don't believe in the doing/ allowing distinction. (Although interestingly, when I made this point at the Joint Session in Southampton last week, Jo Wolff, who was in my audience, chipped in that in Israel they design life-support machines in precisely this way, so that Orthodox Jewish hospital staff can avoid doing killings rather than allowing them.)
2. I've written a bit myself on people's tendency to infer, mistakenly as I think, that if X is morally responsible for E's happening then he must be causally responsible for E's happening. The ambiguity in "responsibility" here is misleading. For often, I believe, the opposite is true: when we're culpable for omissions, usually what we're culpable for is precisely not that we were causally involved (in some bad event), but that we weren't causally involved (in something better that could have happened instead). If anyone's interested, I say this in "Two distinctions that do make a difference", Philosophy 2002; it's also on my webpage at http://www.open.ac.uk/Arts/philos/aodpde.wpd.doc.

Josh,
You should see whether your hypothesis about the computer is right. That seems like an experiment worth doing. But for my part, it will take quite a lot for me to think that our concept of causation includes some normative bits. If it does, I'm happy with an error theory.

Still, I find the upshot you've already discovered extremely interesting. For one thing, some Kantians (not me) think of agent causation as essentially normative, in the sense that you didn't fully cause an action unless you were in some sense motivated morally. Your experiment seems to show that this is not the way we think about agent causation. We think precisely the opposite way. You caused it when you were motivated badly.

Augustine actually works this into his explicit view, but he takes it much further. He doesn't think we're praiseworthy for anything we do, since we do good things only because God allows us not to be sinful in those instances. But then of course we're blameworthy for the wrong we do, since God isn't doing anything to us to enable that.

I wonder how much of this is a remnant of that sort of view still at work among the population. It would be interesting to see whether religious, particularly Christian-influenced, populations show more of this sort of thing than other populations.

Thanks so much for all these comments. They are really extraordinarily helpful.

I notice that a number of people are suggesting hypotheses that invoke two distinct factors. In these hypotheses, there is a core concept of doing/allowing that does not involve moral considerations in any way, but then there is also some additional factor (pragmatics in Jussi's hypothesis, confusion in Tim's) that allows moral considerations to affect people's actual responses.

I would certainly be open to this sort of view -- but only if we had some independent way of getting at the content of the core concept. For example, if moral considerations do not really play any role in the fundamental doing/allowing distinction, then it must be the case either that both of the cases we present are truly cases of doing or that both of the cases we present are truly cases of allowing. But is there any way of figuring out which of these two options is correct? I mean, is there any way to know whether (a) the core concept treats both cases as allowing but then some additional factor leads us to classify the morally bad case as doing or (b) the core concept treats both cases as doing but then some additional factor leads us to classify the morally good case as allowing?

One way of looking at the issue is to start from something like the principle of compositionality. On this view, whatever contributions the terms 'doing' and 'allowing' make to the truth-conditions of sentences, these contributions must be uniform across different sentences. Thus, if these terms are morally loaded, this should show up generally. Now, your hypothesis seemed to be that it is part of the concept of causing or doing that it is not an instance of a morally good event. But now take typical morally good events that look like acts: keeping promises, saving lives, providing food for the starving, and so on. Under your hypothesis about the 'folk concept', the folk would hesitate to classify these acts, or would classify them less often, as instances of doings and of causing things. But I think I know a priori that everyone would count these as doings. So, in general, I think you won't get any evidence, other than the odd euthanasia case, that doing and causing are morally loaded terms in the way you think. Thus your hypothesis would lead to the idea that the folk are misapplying the term vastly more generally than the hypothesis that something else is going on in your thought-experiment.

I'm not sure I see the point of the last question. The core concept, whatever it is, is, like most other concepts, likely to be a vague concept, so that its rules of application do not straightforwardly extend to classify all cases. Your thought-experiment is a really hard case, not one our linguistic community comes across every day in doing/allowing talk. Thus, it might be that the concept hasn't been sharpened for that case. It may be that under one description both cases count as doing and under another both count as allowing. Why would this matter?

I started to write a longish comment, but then I thought to myself, hey, I don't belong to this blog, but I do belong to another one.... So I posted it there:

http://experimentalphilosophy.typepad.com/experimental_philosophy/2006/07/sinnottarmstron.html

It seems to me that there are some possible questions that might affect the outcome on the importance of the intentions motivating an action.
1) What determines that the 'morally bad' case is morally bad, or the 'morally good' case is morally good? Is it bad because in the morally bad one the patient wants to live (or die, in the 'good' one), or because the doctor hates him (or is following the patient's wishes, in the good one)?
2) Why do we think that the doctor's hating the patient is motivating him to turn off the machine? It is stipulated that this is the motivation, but is it not possible for someone to hate a person and still do the right thing regarding that person?
3) What would happen if a third question were asked where you simply 'know' that the patient is terminal, needs the machine to stay alive, and the doctor either turns off the machine or keeps it going? We do not know anything about the patient's desires or the doctor's feelings towards the patient. Is one of these actions morally good and the other morally wrong? Or is it indeterminate? If it is indeterminate, what effect does this have on the importance of correctly locating the relevant intentions?
4) Whose intentions (motivations) are we morally assessing, the patient's or the doctor's, or both?
5) What about a scenario where the patient wants to die and the doctor, because he hates him, does not turn off the machine? Why is the doctor's not turning off the machine bad in this scenario?

I ask these questions because if we locate the relevant intention in the patient's desire to live or die, and that is what determines the goodness or badness of an action, then we do not need an experiment to determine what we should do. All the experiment can do is determine whether the subjects have the correct 'intuition' about what we should do. But the correctness of what should be done, and why, is determined independently of the experiment.


I still am not clear on what experimental philosophy is trying to achieve. I am finding the experiments and debates very interesting (I am learning a great deal), but I am left with a Moorean open question, 'but is it philosophically interesting that subjects react as they do?'

Thanks for making me think.
John Alexander

Joshua:
In my earlier comments on your research I find that I misstated my concern about the philosophical interest of your endeavors. I did not mean to imply that what you are doing is not philosophically interesting. I think exactly the opposite. Thanks in large part to my son’s interest in experimental philosophy and to discussing some of the issues with him, I have come to think that it provides a very rich field for investigating, and possibly resolving or reformulating, classical problems. The question that I have, and this is what I was trying to convey, is: what do you take to be the philosophical implications of your study? The fact that a group believes x and that this belief affects some of their other beliefs is important information and may have important implications, but how does their having a belief about x bear on whether their belief (intuition) is a correct or a false one? The claim that a significant number of people think that the doctor who hated the patient caused the death of the patient whereas the other doctor’s actions did not is very interesting, but how does this relate to the question ‘but are they correct in thinking that the doctor caused the death of the patient in one scenario, but not the other?’

I apologize if I offended; I did not do so intentionally.

John Alexander

So I thought I would make this a family affair and offer a few comments about what the results achieved by experimental philosophers can tell us. Standard philosophical practice has long been the practice of appealing to intuitions as evidence for or against philosophical claims. According to standard philosophical practice, philosophical claims are judged prima facie good to the extent that they accord with our intuitions and prima facie bad to the extent that they fail to accord with our intuitions.

If intuitions are supposed to be able to be used as evidence for or against philosophical claims, then we should be interested in studying the nature of the relevant intuitions and their suitability to serve as evidence. Enter experimental philosophy.

So what can experimental philosophy do for us?

One thing it might do for us is to provide us with a means for determining precisely what intuitions are held by both philosophers and the folk. This is extremely important for those who want to continue the practice of employing intuitions as evidence. If we are going to continue to rely on intuitions as evidence, we need to be able to determine what the intuitions are. The research programs conducted by experimental philosophers can deliver the evidence: the intuitions that can serve as an evidential basis for or against philosophical claims. A great deal of really interesting research is presently being conducted by experimental philosophers in order to begin to determine what the relevant intuitions are.

Another thing experimental philosophy can do for us is to provide evidence that calls into question the practice of appealing to intuitions as evidence, by calling into question the suitability of intuitions to function as evidence. Recent results in experimental philosophy have shown that intuitions are subject to systematic variation and instability. The Weinberg, Nichols, and Stich and the Machery, Mallon, Nichols, and Stich results demonstrate that intuitions systematically vary between cultural (and socioeconomic) groups. But if intuitions systematically vary, then it is possible to use intuitions as evidence for divergent (or, worse, contradictory) philosophical claims. The upshot is that if we are to maintain standard philosophical practice, then we will have to admit that philosophical claims are relativistic, find a way to privilege the intuitions of one group over another, or admit that intuitions that demonstrate variability cannot be used as evidence. The Swain, Alexander, and Weinberg results demonstrate that intuitions generated in response to one thought-experiment can vary according to whether, and which, other thought-experiments were considered first. But if this is so, then intuitions are unstable. And this makes it unlikely that there is any fixed set of intuitions about a particular thought-experiment that can be appealed to as evidence. Finally, the Nichols and Knobe results demonstrate that the presence or absence of affective content in certain thought-experiments can influence subjects’ intuitions. This data reinforces the claim that intuitions are unstable.
So, these are two different philosophical uses to which the results of experimental philosophy can be put. Now, you ask a very interesting question: can the results of experimental philosophy be put to use in helping to determine whether the intuition that some person or group possesses is correct? Let’s suppose that experimental philosophy can determine that person A has the intuition that p. And suppose that we wonder, “is person A’s intuition that p correct or incorrect?” I take it that a similar question can be raised regarding perception. Suppose that we determine that person B has the perception that q. The analogous question is, “is person B’s perception that q correct or incorrect?” The skeptical worry, I take it, is that unless we can provide an answer to either question, we are not in a position to use either intuition or perception as evidence. The skeptical worry about perception is that our perceptual seemings could remain the same (I am appeared to red-ly and square-ly) even were the facts radically different (I am a brain-in-a-vat and there is no red square before me). Similarly, the skeptical worry about intuition is that our intuitions could remain the same (it seems to me that knowledge is not simply justified true belief) even were the facts different (knowledge actually is simply justified true belief). So, can determining that some person or group has the intuition that p help resolve the skeptical worry about intuition? I think not. Just as determining that person B’s perception is that q doesn’t resolve the skeptical worry that person B’s perception might not be correct, so determining that person A’s intuition is that p doesn’t resolve the skeptical worry that person A’s intuition is mistaken. But I am inclined to think that this is not a problem.
Even if the results of experimental philosophy don’t go much distance towards helping philosophers resolve the skeptic’s worry, they do go quite some distance towards reforming the standard philosophical practice of appealing to intuitions as evidence.

An interesting question that I will leave for discussion is: what kind of evidence would allow us to determine if person A’s intuition that p is or is not correct?


Thank you for your response. As usual (proud father speaking) it is concise and clearly written.

A couple of comments:

The ‘holy grail’ in philosophy regarding knowledge is that knowledge must be 1) certain and 2) unchanging (Plato), or indubitable (Descartes). This concept of knowledge is open to the claims of skepticism. The way to avoid skepticism is to recast knowledge in a more pragmatic context where knowledge does not require certainty, only a high degree of probability, and can be defeasible. Science, and its methodology, is the paradigm. This approach can be traced back to the Sophists, up through Bacon and Hume, to the logical positivists and pragmatists and their intellectual descendants. This approach is not subject to skepticism, but claims that knowledge, following James, is something “we can assimilate, validate, corroborate and verify” and may be falsified based on the results of further testing.

It seems to me that experimental philosophy best fits into the second approach. This is confirmed by the results that Joshua A comments on relative to what has been achieved in experimental philosophy as well as his analogy between intuitions and perceptions. I have suggested that experimental philosophy needs some non-circular justification for appealing to intuitions as evidence. I think that the analogy provides an indication of such grounding. If we maintain that perception is a natural phenomenon, which it must be in so far as we need, and possess, certain naturally occurring specialized equipment to have these types of experiences, then we can maintain, if the analogy holds, that intuitions are also natural phenomena that can ultimately be understood by developing a well worked out epistemological naturalism. Intuitions would then be seen as natural data, just like sense data, that would verify certain beliefs as being warranted.

I think it was Otto Neurath who suggested that what is needed to resolve (eliminate) epistemological issues is a well worked out sociology of knowledge, or how knowledge is gained through the various processes of socialization that a person goes through. This seems to be what experimental philosophers are in fact doing. If it can be demonstrated through experiments that persons of a certain group have certain core beliefs, or intuitions, that define that group then we can ascertain whether certain other beliefs are consistent with these core beliefs. The criteria, or evidence, for accepting intuitions become their pragmatic usefulness in developing conceptual frameworks. Intuitions then are neither true nor false in the classical sense of agreeing with reality, but rather they provide the consistency and coherence needed to support a workable, unfolding ‘web of belief’ that makes sense of the world by ordering our experiences and is not contradicted by any of our experiences.

John Alexander

Doug
Then, I believe, the rescue case will not work either. You have self-interested reasons for saving Bob although there is no altruistic difference between saving Bob and Abe. It seems to me that, given what you state above, you need two conditions fulfilled to judge the rationality or irrationality of an action. Doing y is not irrational only if 1) there is no self-interested reason to do x over y and 2) from an altruistic standpoint x and y are of equal value. The first condition is not met in the rescue case although the second is. Your intuition that it is not irrational to save Abe would only hold if you placed all the explanatory and justificatory weight on the second condition and disregarded the first, but that would be counter-intuitive given your stated criterion that to be irrational there has to be a self-interested reason to do one over the other.

Or am I missing your point entirely?

Thanks for your response. I am enjoying and benefitting from this exchange.

Sorry for posting in the wrong discussion. Sometimes I am dumber than a bag of dirt.

