
May 29, 2009

Comments


Josh, the empirical data indicate that the author of Moral Dimensions is not Timothy, but Thomas (often called 'Tim').

Oh no! I've now corrected that.

I'm not sure that many of these vignettes are very relevant to whether Scanlon's view is right or not. First, Scanlon's view is about what is wrong and blameworthy, not about what people think is wrong and blameworthy. Part of the argument probably is revisionary - people don't always make the judgments they should be making.

Second, if I remember this right, much of the first chapters in Scanlon's book recognises that our intuitions about the wrongness of actions co-vary with information about agents' mental states. After all, this is one of the lessons of the DDE intuitions that are discussed in the first chapter. However, Scanlon attempts to argue that the relevant mental states themselves are often indicative of certain external considerations that are the real wrong-makers. In the cases above, these are, for instance, about taking appropriate precautions when carrying out risky activities. Whether one takes such precautions is not necessarily a matter of being in given mental states, even if the mental states indicate whether the appropriate steps have been taken.

Finally, the vignettes above at least seem really too underdescribed for assessing blameworthiness or wrongness.

I haven't read much of Scanlon's book (yet!). But I think Jussi has a good point that Scanlon's approach to this subject is a bit different from most when it comes to its relation to the empirical data. Since Scanlon is so sympathetic to the judgments in the cases that seem to support DDE, his approach does seem quite revisionary. So long as he provides in the book a good error theory or other good reasons to think we're making mistakes along these lines, he should be in the clear.

I don't know whether his account in the book can serve as a good error theory. But I figure evaluation of his view in relation to the empirical data should be directed there. Alas, I'm not in a position to contribute in that way since I haven't read the book! :)

However, I'm not sure I agree with Jussi that these data aren't "very relevant for whether Scanlon's view is right or not." If we're evaluating his view as an error theory of results such as these, it is important for his view that these are the results to be explained. And even if he is mostly right about what we say about these cases, some of the details of the results might make a difference as to whether his error theory is plausible. For example, maybe his conception of what he's attempting to explain away is slightly different from what's been borne out in the experiments. In that case, it could turn out that the error theory he's offering explains his phenomenon well but not the actual one. So I think the data can still be relevant even if his view is revisionary.

This is interesting, but a few things about the studies seemed odd. (I need to give this all a careful read.)

Subjects were asked "How wrong was [agent]’s behavior?" My students often initially speak as if there's not much difference between saying that someone's behavior is wrong and saying that it's immoral where it's clear that they think there's an attitudinal component to the judgments about immoral behavior. It doesn't take long to convince them that there's this other notion of things that shouldn't be done for moral reasons that they think comes apart from this. When they start to speak this way, they don't speak as if they are correcting something they've said but talking about something else. If the subjects were using 'wrong' to pick out something other than 'shouldn't have been done due to moral reasons', I take it that this is perfectly consistent with Scanlon's view.

In discussion of Experiment 1, the authors wrote, "Furthermore, in cases where the agent accidentally produced harm there was a significantly greater enhancement of blame than wrongness, as compared to cases where harm was neither intended nor occurred. This suggests that intention is not a necessary component for blame attribution—people will blame an agent (albeit slightly) for unintended consequences, and do so more readily than they will declare the same agent to have acted wrongly."

As for the blame judgments, how do we know that the subjects aren't using 'blame' as something like 'cause'? (The degree of blame determines something like degree of causal contribution.) It seems subjects may well be. Wouldn't it have been better to frame the question in terms of blameworthiness?

I love it when the folk agree with me. Of course, when they don't they are wrong... :-)

Josh,

I'm not sure he needs to say that most of our intuitions about these cases are mistaken. I think the project is more to show that the pattern of judgments is explained by something other than the mental states of the assessed agents. So, in this sense, the project is accommodating rather than revisionary. This avoids the sort of worry that Jon has.

About the relevance. I guess I would like to see either an experimental study of Scanlon's own thought-experiments that gives results which don't fit his view, or an explanation of why the intuitions in the Cushman studies are incompatible with his framework. Neither of these has been done yet.

Thanks for all these helpful comments. I think they really help to get at the key issues here.

The original suggestion from Scanlon was that judgments of blame directly rely on information about the agent's mental states but that judgments of wrongness only make use of mental state information in a kind of derivative sense. I can definitely see how this would allow one to explain away certain patterns of data. For example, suppose that mental state information ended up explaining, say, 15% of the variance in people's wrongness judgments. I could ask: 'Why is that happening? Why doesn't mental state information just have no effect at all?' And someone could reply: 'Well, mental state information is not directly relevant, but it is relevant in a derivative sense, since it helps us learn about the external considerations that are the true wrongness-makers.'

But that is not how the data actually came out. Instead, the data show that mental states actually have *more* of an impact on wrongness judgments than they do on blame judgments. Indeed, mental state information serves as the main determinant of people's wrongness judgments, accounting for a full 83% of the variance.

Now, it does strike me as at least a little bit difficult to explain this sort of pattern of data on the assumption that people really take mental state information to be directly relevant to blame judgments but not to wrongness judgments. At the very least, the results seem to move one closer to a position like the one Clayton suggests above...

Josh was kind enough to let me know about this thread, and I want to thank everybody for the insightful comments. Whether the psychological data is philosophically informative remains to be seen, but the philosophical discussion is certainly valuable to this psychologist, at least!

I wanted to briefly weigh in on a few of the issues that have been raised. First of all, Josh is right to emphasize that mental states are *overwhelmingly* determinative of subjects' wrongness judgments in my studies. The question "how wrong was behavior X" seems to reduce very nearly to the question "how much did the agent foresee and desire harm as a consequence of X?", at least for the 8 scenario contexts I tested. But judgments of deserved blame showed a significantly greater sensitivity to outcomes (while still being *mostly* determined by mental states). What explains these effects?

Clayton Littlejohn draws attention to the fact that the agent's *behavior* was the focus of the wrongness question. I tend to agree that this is playing a large role in making mental states determinative of wrongness judgments in my study. When we choose to behave in a particular way, we are forced to adopt a prospective stance that depends on mental states alone: what do I want to happen, and what do I think will happen? Thus, when evaluating the behavior of others, the strategy of asking "how would I have behaved?" would lead one to exclusively consider the other's mental states, as opposed to any accidental outcomes of that behavior. The hypothesis that we evaluate others' behavior by simulating what we would do is a focus of my current research. I would predict that the question "how much blame does Jenny deserve **for her behavior**" would be judged more or less identically to "how wrong was Jenny's behavior".

Clayton also offered that subjects might be interpreting the question "how much blame does Jenny deserve" as being about her causal role, rather than moral responsibility. Josh wisely suggested reading just the first experiment of what was admittedly a long and dense paper of mine -- to cut to the chase, in the general discussion I also suggested that blame judgments were probably biased by the causal interpretation of the word 'blame'. However, Expt. 2 shows that *punishment* judgments are strongly influenced by accidental outcomes (not just mental states). This finding became a focal point for the rest of my paper. And exploring why punishment judgments show greater sensitivity to outcomes than wrongness judgments is a topic of current research for me.

I want to conclude by suggesting a way in which Scanlon's proposal and my data might be compatible, but I should warn the reader that my knowledge of Scanlon's view is limited to what I've heard from Josh and others -- I haven't yet read his book myself (but certainly look forward to doing so). The take-home message from the experiments reported in my empirical study is: there are two different processes by which we make moral judgments, one that assesses causal responsibility for bad outcomes and another that condemns behavior on the basis of "bad" mental states and exonerates behavior on the basis of "good" mental states. A prediction of this model is that different judgments that fall broadly within the moral domain will sometimes show strikingly different sensitivity to outcome vs. mental state information. My data and Scanlon's arguments might both be viewed as symptoms of this general dissociation between causal and mental state analyses in moral judgment.

Joshua,

I still don't get this. Why is Scanlon's view incompatible with the discovery that mental state information determines people's wrongness judgments fully and directly? This would still be true if people used the information about the mental states to make judgments about the real wrong-making considerations in the situation. After all, it has been accepted that the mental states are the best indication for whether those considerations are present. Did the studies ask whether what makes the actions wrong are the mental states of the agents?

Fiery, Josh, I wonder how robust people's judgments (intuitions, if you will) about these cases are. That is, would they change their verdicts if they were made aware of the implications of the underlying principles? I charitably suspect that they would. Suppose they were asked about some entirely innocent, non-risky action - Joy turns on the TV when coming home after work - that has some bad consequence (because of a minor earthquake that took place while Joy was away, the wiring is messed up and the neighbour's dog is electrocuted). If they really thought blameworthiness hangs on the consequences, they should be willing to blame Joy. And if they weren't so inclined, they might rethink what they thought about Jenny as well (provided she has taken appropriate caution, which is not clear in the vignettes). In general, I think the evidential value of folk surface intuitions as regards their subject matter is close to nil, which is why I think it should be standard practice in all x-phi studies to test for robustness, even in this sort of minimal way. (How does this sound: For philosophy, intuitions are potentially evidence about a subject matter; for psychology, they are the subject matter.)

Also, not having read Scanlon yet either, I can see a pretty straightforward way in which mental states make a difference to the moral status of the behaviour. The same bit of externally described behaviour will constitute different actions, depending on the agent's mental states. I fire a gun on a shooting range, miss the target, and hit an assistant in the head. If that was my plan all along, my behaviour can variously be described as aiming at the assistant, firing at the assistant, or even murdering the assistant. Under these descriptions, it will surely be wrong. If I tried to shoot at the target, didn't even know the assistant was there, had no grudge toward him, and had never missed the target before, these descriptions would be false, though the action could still be described as shooting the assistant (and, less misleadingly, accidentally shooting the assistant).

Now, action individuation will naturally depend on the consequences as well as mental states. (If the assistant doesn't die, I can't have killed him.) If the moral status of the action hangs on what action-type it belongs to, then you'd precisely expect both mental states and consequences to influence it. (It would be nice if the action-types in question were defined without moral terms - not as murder, but as intentional killing not in self-defence etc.)

This doesn't mean that there couldn't be a separate dimension of blameworthiness that depended on mental states in a different way. For example: Alex intentionally shoots a gun range assistant. What he does is wrong. Case 1: He suffers from paranoid delusions and thought the assistant was a KGB agent about to finish him. Case 2: Alex knows that his 5-year-old daughter has been kidnapped by a man who will torture, rape, and murder her unless Alex shoots the assistant. In both of these cases, it seems Alex's mental state mitigates his blameworthiness without making a difference to the wrongness of the action. It would be surprising, to say the least, if the folk didn't recognize this, once they thought about it carefully. (Though I would expect some difference in wrongness judgments as well, in the absence of thinking carefully about a number of cases and training in consistency across cases.)

Jussi, you write that "Scanlon's view is about what is wrong and blameworthy and not what people think is wrong and blameworthy."

I'm always a bit confused by this kind of claim, since it seems to imply the existence of criteria for determining what is wrong and blameworthy that are independent of people's judgments or intuitions. What are those criteria? How, in your view, are we supposed to determine whether Scanlon's theory is correct?

Saying that doesn't mean that wrongness or blameworthiness are completely independent of people's intuitions. It could be that our intuitions reliably track what is wrong or blameworthy even if they don't make it the case that something is wrong or blameworthy. What should be the case, though, is that the intuitions that count are well reflected, coherent, tested, stable, informed, and so on, rather than gut feelings on the surface. I'm worried that the sort of empirical data that is discussed here is of the latter kind. As Antti points out, with a bit of reflection and Socratic questioning people might come to quite different conclusions. And I do believe that the intuitions of philosophers like Scanlon, who have reflected on these issues more deeply, have more weight than the folk reactions. That's the way for us to determine whether his theory is right too. We try to think about the cases, our intuitions, other cases, the background theories, and try to reach a reflective equilibrium. I find this much more reliable than looking at the first thing people say about peculiar cases.

I am extremely gratified by this attention to my book. I'm under a couple of pressing deadlines at the moment, but can't resist responding quickly to some of these interesting comments. (I will try later to respond to the other thread.) First, on the general methodological point. What I am mainly trying to do is to make up my own mind what to think about right and wrong, blame, etc. This does not mean that facts about what others think are irrelevant. If I learn that some or many seem to disagree with my conclusion, I need to consider why they think that and whether, in the light of what seem to be their reasons, I should think that too. But this is a matter, as I said, of making up my own mind what to think about questions of the relevant kind. There is also the question of whether we disagree because we are arriving at different conclusions using the same moral concepts, or whether we are understanding the relevant concepts (right, wrong, permissible, blameworthy, etc.) somewhat differently. One of the overarching themes of my book is that there are different dimensions of moral criticism that are not commonly distinguished, even by those of us whose profession it is to think about these things. I put forward for others' consideration a particular way of understanding some of these moral notions. So it is not surprising, or a definite counterexample to my claims, that this is not the way many people who have not thought very carefully about these things tend to see matters.

I got into this project because I had been thinking about DDE for around 30 years and was still quite conflicted about it. I wanted to figure out why, in hopes of resolving this inner disagreement. What I concluded was that I was not distinguishing between the question of permissibility that I face when I am deciding what to do and a slightly different question that I face when I am assessing the way in which I or some other agent decided what to do. I suspect that, like me for most of this 30 year period, most of the people in these experiments are not thinking about this distinction.

A second point is that I offer a particular interpretation of the question of moral permissibility. I point out that there is a question whether those who believe in DDE are understanding it as a criterion of permissibility in my sense, or whether they are thinking of some other basic moral category, such as that of a "good action." There is some textual evidence, I think, that the latter is so in some cases. If one takes the category "good action" as the relevant one, then the distinction I mentioned above may disappear. I call attention to this possibility, but leave it as an open question for those concerned to answer. Once one sees this possible difference, then there is the higher order question, so to speak, of what reasons there might be for taking permissibility in my sense, or good action, as the one we should attend to.

Permissibility as I understand it often does not depend on the agent's reasons for acting (e.g. it does not so depend in the way that DDE proposes). But it can depend on an agent's reasons in various ways, and one such way is that it can depend on whether the agent is taking due care.

Finally, I offer an explanation of the phenomenon of "moral luck" (or the various phenomena that go under that name -- see a long end note near the end of Ch. 4). This involves explaining how blame, or at least an important aspect of it, can vary depending on the effects of what an agent did, even if these were beyond the agent's control. This phenomenon might explain some features of the data described.

Tim discussed some of his thinking on methodological questions in his last Locke Lecture which should now be up someplace on Oxford's webpage, for those interested in an elaboration of what he says above.

Jussi, thanks--your reply (along with Tim's methodological remarks) makes a lot of sense. Your final claim, though, presents a false dichotomy.

"We try to think about the cases, our intuitions, other cases, the background theories and try to reach a reflective equilibrium. I find this much more reliable than looking at what's the first thing people say about the peculiar cases."

I don't know anyone who ONLY wants to look at people's reactions to particular cases in order to arrive at reliable judgments. The point of studies like these is to present inquirers with relevant information to be incorporated into the search for reflective equilibrium. It's all just part of the process. Learning that people have different gut intuitions than we do about cases is often relevant to the process, since it can alert us to possible biases and peculiarities of our own.

There's also the possibility that sufficient reflection and dialogue would lead different people to different conclusions. If that were the case, then it seems the view would not be about what blameworthiness and wrongness are full stop. The view would be about what blameworthiness and wrongness are for Tim Scanlon and those who share his core post-reflective intuitions. You and Antti might be right that Socratic questioning and further reflection would make the people in these studies change their mind. But conjectures like these are often made without much evidence or justification.

Just to amplify Tamler's comments, maybe the more charitable way to look at these kinds of studies is as generators of data, and only indirectly--by way of some theorizing--can we generate conclusions from those data. Starting with basic data is, importantly, where we all start in the process of reflective equilibrium; it's just more robust, in a way that is sometimes relevant to our discussions, to consider a wider host of data than individual/armchair data. (For example, we often want to not just stipulate meanings, e.g. that by "moral blame" I will mean "a jammed stapler," but to engage folk discourse.)

The worry, though, is that some of it may be, and indeed is likely to be, junk data. It's not just that it doesn't reflect the facts, but it doesn't reflect the respondents' views either. That's why people like myself have tried to distinguish between different sorts of 'intuitions'. If we want to know what the folk think, surely we want to know what they really think about something, and that's something that may not be reflected in gut reactions, which may be easily tricked or misled. If you look at Rawls, he always talks about 'considered judgments' as input to reflective equilibrium, not just any old judgments.

One reason why a lot of people are dismissive about certain sorts of data is that they can, as it were, reconstruct the thought process of the respondents. They may look at the results and think something along the lines of "That's what I might think if I just looked at this case - but when I bring such and such similar cases to mind, I see that such and such feature of this case would mislead me". Again, a psychologist might be interested in why people have the first impressions they do, but I can't see why in the world a philosopher would have such an interest, or why we'd want to crunch them through reflective equilibrium.

Tamler,

that seems right to me (even if Antti's further worries are on point too). There is a sense though in which some experimentalists have talked of any considered judgments as tainted by the theory and thus as something to be set aside. This seems worrying to me at least when we talk of moral issues and not merely linguistic ones.

Antti,

I don't think we can start, as a methodological presumption rather than a conclusion based on empirical investigation, with the premise that there is one univocal thing that the folk think about any given topic, or that, if there is, it is not reflected in possible-case intuitions. The best that we can do is look at a host of data for each topic and try to make sense of it all. You dismiss "first impressions," but philosophers frequently ask us to consult our first impressions about cases as a way of helping motivate their theories. Of course, this isn't the only evidence appealed to, and it must be evaluated in light of other evidence, but we're always going to have to appeal to some intuitive judgments somewhere. (Even considered judgments require a starting point.) To show that some datum is not representative of what we "really" think, you're going to have to appeal to other data. And as soon as there is an appeal to intuition, the question of whether those judgments are all that intuitive will be open. Of course, these issues have received a lot more treatment (some by you, and some by me) than we're giving them here, but I don't think the question can be settled by saying that first impressions are irrelevant. They're part of the process of philosophical theory construction, an understanding of which requires answering three questions: when we want first impressions, whose first impressions we want, and what we can use them for.

Joshua, one thing I'm objecting to (and this is obviously more general than our discussion here) is equating intuitions (or intuitive judgments) with first impressions. I do think that judgments that don't derive from theories we're weighing in a particular context play an irreplaceable dialectical role. But I would propose the following rule of thumb: if you want intuitions to do theoretical work, make sure they can carry the weight. This goes for armchair work just as much as for questionnaires. The pragmatic difference is that in the armchair, you know that if you're going to appeal to an intuition about a case, it better be solid or you'll be laughed out of the seminar room when you bring it up. When it comes to the man on the street, it appears that many people are still willing to credit any old judgment with some evidential weight, no matter how shaky it may be.

And that's my problem. I want the first impressions to pass some kind of test before I'm going to take them seriously, just like I put my own first impressions to a test when I'm thinking about some subject matter (and if I don't, my peers will). Quality beats quantity any time. There is a burden of proof on anyone who claims that such and such proposition is intuitive (where that designation is meant to show that we should take it seriously in theory construction). I suggested above some ways in which the reliability of experimental studies on the topic under discussion could be increased. No doubt they are insufficient. But then again, I'm not going to lose sleep over the reliability concerns with empirical studies - the onus is on those who appeal to them.

Jussi, I'm not sure who you're referring to, but I agree that 'tainted' isn't the right word. And certainly, the considered judgments of philosophers who have developed moral theories should not be set aside. (Out of curiosity, who has written that they should be?) On the contrary, I agree with you that they should probably be given more weight. But surely working on a topic for many years affects our considered judgments in SOME ways that don't add to their reliability.

By analogy (and this might apply to some of Antti's comments too), consider a successful and intelligent film critic who has been studying and writing about movies for forty years. Overall, I trust this film critic's reflective judgment about the quality of a movie more than the spontaneous judgments of an opening night crowd. At the same time, there are probably some aspects of the movie that, thanks to forty years of intensive film study and theorizing, the critic can no longer appreciate. You need fresh eyes to give a fair evaluation of these elements. If I were an outsider, or even the critic himself, I would want to know how the opening night crowd reacted to particular scenes in order to arrive at my all-things-considered judgment about the quality of the film. It might still turn out that I reject their verdict entirely. But I certainly wouldn't dismiss the reactions out of hand. Nor would I feel comfortable that my judgment was correct without knowing how other people responded to the film.

Antti, I think, then, that there's more common ground than appeared initially. First, if you don't want to call first impressions 'intuitions,' that's your prerogative, certainly. Second, I think everyone would agree that we shouldn't give the data--be they first impressions, second impressions, or anything else--more evidential weight than they deserve.

I'm not sure, though, about your claim that "it appears that many people are still willing to credit any old judgment with some evidential weight, no matter how shaky it may be." I don't know who these people are or what the threshold is for "some evidential weight." But the part about it being "shaky" and "passing a test" is itself of independent interest. How we know that a judgment is questionable is only by appealing to other judgments, themselves subject to scrutiny. The question, again, is what kind of scrutiny we want to subject them to, and one plausible answer for some questions is the test of widespread endorsement, constrained by the norms of the best social scientific practices.

