
August 14, 2010

Comments


Dave,

I don't really have an argument against your position; I think it just comes down to how we understand what a belief (or a judgement, or a view) is. But I, for one, like to keep these concepts intellectualized. On your view, it will turn out that no "practically inapplicable" views have proponents. No one is really an error theorist, a Cartesian skeptic, a solipsist (probably), etc. But it seems like each of these views might be true, and thus that we should not, by conceptual fiat, rule out the possibility of coming to hold them.

Nevertheless, I agree that space should be made for talking about belief-like (judgement-like, view-like) attitudes or mental states that play the deliberative role beliefs usually play. I like to call them 'beliars'. They are, obviously, quite common among philosophers who hold counter-intuitive views. But I think they are found elsewhere, as well. For example, I have certain OCD-like tendencies. One manifestation is that I feel compulsions to do silly things, like avoid cracks on the sidewalk. But, phenomenologically, these compulsions do not present like mere urges or even desires. They often present as belief-like, or intuition-like. There is a sense that failing to crack-walk might have dire consequences. But even when I act on this beliar, it still seems to me (phenomenologically) that I am doing something different than when I deliberate using a proper belief. A committed consequentialist might say the same about his day-to-day deontological moral behavior.

Hi David,

I posted something related to this at prosblogion a few weeks ago. Here's a strange fact about this sort of case. I'm pretty sure that any avowed consequentialist will also admit that, for sure, he will in the near future fail to fulfill a consequentialist requirement to perform an action A such that (i) his commitment to consequentialism won't have lessened, (ii) he has no justification for failing to perform A, and (iii) he has no mitigating reason for failing to perform A. If all that's true, then I don't think you're genuinely a consequentialist. It amounts to asserting that I'm a consequentialist but my commitment to consequentialism is not so strong that I do not foresee my own unmitigated non-consequentialist behavior. The problem of course generalizes to other moral views.

Mike,

I will admit that, for sure, I will in the near future go to bed later than I believe I ought to, though (i) my commitment to the idea that going to bed earlier would be best for me will not have lessened, (ii) I have no justification for failing to go to bed earlier and (iii) I have no mitigating reason for failing to go to bed earlier. I don't think this means that I don't really believe going to bed earlier would be best for me. It just means that I'm suffering from weakness of the will. We all recognize that immediate desires can lead us to act irrationally. But surely deontological attitudes can be just as powerful motivators as desires. So why can't the consequentialist just appeal to weakness of the will as I have?

Maybe you in-between believe in consequentialism:

http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/ActBel.htm

I think David F. is right to say that what we want to say here depends in part on how we understand what a belief is. On a widely accepted view in phil mind, being a belief with a certain content depends on its functional role, i.e. what stimuli cause it and what it causes--where the latter includes both other beliefs and (together with appropriate desires) actions.

So, here's one way to think of the puzzle. Suppose functionalism is true and that acceptance of a moral theory is a matter of having a belief in the theory's truth. On those assumptions, it's easy to see how someone could be an amoral consequentialist who never does as consequentialism requires; she would be someone who has no desire to do what she regards as right. The puzzling feature of the case Sobel (and Wall and Jacobson) are interested in is that it involves someone who does have a desire to do what's right. Appropriate belief/desire combinations need to be compatible with some weakness of will (as in DF's not going to bed early case). But if we think belief is given by its functional role and that there is an appropriate desire, we're going to have to posit an incredibly high degree of weakness of will in their case to explain the behavior pattern and hold onto the belief, and that's hard to square with functionalism. Some folks think functionalism isn't the truth about all our mental states (most famously the phenomenal ones). But it's pretty widely accepted for beliefs.

I don't think this means that I don't really believe going to bed earlier would be best for me. It just means that I'm suffering from weakness of the will.

Dave (and Dave), Weakness of will is a mitigating reason, so it's not the sort of case I'm describing. You will fail to live up to consequentialist standards even when it is not a case of weakness of will. There will be cases in which you won't even try to live up to those standards. It's not true that every time a consequentialist fails to live up to the consequentialist standard he is suffering from weakness of will. Sometimes he just can't be bothered. Here's a useful example. I believe that it would be best overall if I get off the sofa now and mow the lawn, but I'm not doing it. My failing to do so is not a matter of weakness of will; I'd have no difficulty doing it if I were sufficiently committed to my principles. It's a matter of not having that sort of commitment to consequentialism. I think that's true for every consequentialist (non-consequentialists, too).

No doubt what it takes to have a belief in an ethical theory will have something to do with what it takes to have a belief generally. But part of my interest in these questions stems from the thought that how connected belief in ethics is to action and emotion (at least in the moralist—that is, in the person who is not an amoralist) has to do with the topic of the belief. I think it makes sense to think that whether someone really loves somebody is more connected to action and emotional responses than whether someone really accepts the truth of relativity. Partly this seems assured because there are so few actions and emotions which are especially fitting with the belief in relativity whereas we can say a lot about what actions and emotions are fitting with being in love with someone. So the “intellectualist” view seems more plausible when the subject is whether I count as believing in relativity and less plausible about whether I am in love with someone. At any rate this seems plausible enough to me that I am not sure Faraci can rightly claim my view commits me to denying the existence of Cartesian skeptics.

A person might dispute this by saying that if I really believe in relativity I would be expected to take bets which offer things I want as a prize if relativity were true, or at least to regret failing to do so. I have arguments elsewhere against direction of fit accounts of the difference between belief and desires and Copp and I there offer arguments against the idea that all desires or beliefs should be expected to show themselves in betting behavior in this way. However, I would need to think more about whether our arguments count as a decent reply to the reply I consider here.

One thing to notice here is that the phenomenon I mean to draw attention to is different from ordinary weakness of will. In the moral case it is not just that I don’t do as I think I ought by the lights of the theory I claim to accept, it is that I also don’t get worked up about failing to do so.

It occurs to me that my most recent post is perhaps guilty of confusing what it takes to be in love with someone with what it takes to believe that one is. I'll think more about that when I get a moment.

I think that we can explain the fact that David's moral theory fails to govern his actions or emotions by the fact that his moral theory conflicts with his theory about what he has most reason to do, all things considered. I believe that David generally acts as he believes that he ought to act, all things considered. So his actions are governed by his theory of practical reasons even if they aren't governed by his moral theory. So we don't have to appeal to "an incredibly high degree of weakness of will" in his case, as Jan suggests. And I suspect that he generally feels guilty when what he does is both wrong and contrary to what he has most reason to do, but that he doesn't feel guilty when he does wrong (by the lights of his moral theory) but has most reason to do so (by the lights of his theory of practical reasons). So I don't see what the puzzle is here. Why should David or anyone else expect his moral theory to govern his actions and emotions when David doesn't believe that his moral theory should govern his actions and emotions? After all, David has written articles expressing his belief that he doesn't always have most reason to act as morality demands and that it would be strange to blame someone for acting as he has most reason to act. So should we really expect a moral theory to govern someone's actions and behavior when he himself doesn't believe that they ought to? Now, of course, Dave's question is whether he counts as being a consequentialist if this moral theory doesn't govern his actions and behavior. And I don't see why not. We, as opposed to him, may believe that the true moral theory ought to govern people's actions and behavior because we believe a person can truly be morally required to act only as she has most reason to act. But Dave has expressly said that he doesn't believe this.

Doug,

Thanks, I wondered about that direction too. But I think I disagree that my subjectivism is playing an exciting role here. First the phenomenon I mean to be calling attention to is, I believe, rather common for consequentialists whether they are subjectivist or no. Second, I do not think that the moral theory that effectively modifies my action and emotions is more closely tied to my own theory of reasons than consequentialism. Well, at least that seems clearly true until you bring in legal sanctions--then perhaps the more deontological morality would have some extensional advantage. But the deontological morality effectively governs my interpersonal interactions and emotions where legal sanctions are not at all salient.

Instead, I would hazard a guess that it is more difficult to reorient one's moralized emotions around an intellectually favored moral theory (when it conflicts sharply with aspects of commonsense) than we suppose. I think it is the weight of my moral upbringing together with social sanctions imposed by my culture in favor of commonsense morality that are the better explanation than my subjectivism.

Dave's question is whether he counts as being a consequentialist if this moral theory doesn't govern his actions and behavior. And I don't see why not. We, as opposed to him, may believe that the true moral theory ought to govern people's actions and behavior because we believe a person can truly be morally required to act only as she has most reason to act. But Dave has expressly said that he doesn't believe this.

Isn't the question whether it's true that he's a consequentialist, not whether he can consistently believe it? Smith might believe he's the best arm wrestler in the room, and at the same time lose to everyone in arm wrestling. Do we want to say that he still counts as the best arm wrestler since, by his lights, even someone who loses all the time can be the best?

Brad C: thanks for bringing that to my attention. One of the things I was hoping would happen here is that I would learn that others have already thought about such issues so your pointer is just the sort of thing I was hoping for. Look forward to getting to the Schwitzgebel paper.

Mike: have you written on these issues outside of the blog you mentioned?

I perhaps should say that there are general issues here that I am largely hoping to work around. I have some fear of flying and perhaps allow this to affect my actions (maybe reasonably--irrational fear is bad to experience too). But I think it clear that I count as believing that flying is safe. But if I felt fear when loved ones flew as well and if I urged them not to fly in the same way I sometimes urge myself not to fly, then I would start to think it a more interesting question whether I really believe that flying is safe. In other words, the disconnect between action and emotional responses on the one hand, and belief on the other, in the ethical case here is more significant than in the fear of flying case.

One more random thought: I would assume that the best functionalist story about belief would give some role to both the intellectualized (what do I say in seminar) side of the story and the action/emotion side of the story. So if there were cases where there is no action/emotion side to the story, the intellectualized part would carry the day about what the agent believes. But in the interesting cases where there is a conflict, I am positing that at least in the case of moral beliefs, the action/emotion side of the equation is the more weighty part.

Hi David,

First the phenomenon I mean to be calling attention to is, I believe, rather common for consequentialists whether they are subjectivist or no.

But isn't it also rather common for consequentialists (subjectivists or not) to think that they have sufficient reason to act in ways that consequentialism deems immoral? Indeed, I can't think of any consequentialist who has explicitly claimed or argued that we always have decisive reason, all things considered, to act as consequentialism demands, and yet I can think of many who have explicitly claimed and/or argued that we often lack decisive reason to act as consequentialism demands. So I take it that the phenomenon that I'm talking about (the phenomenon where the consequentialist takes there to be sufficient reason to act immorally) is quite common, and may be just as common as the phenomenon that you're talking about.

Second, I do not think that the moral theory that effectively modifies my action and emotions is more closely tied to my own theory of reasons than consequentialism.

Could you give some examples? When it comes to not giving enough money to charity, isn't that more in accord with your theory of practical reasons than it is with your consequentialist moral theory? On your theory of practical reasons, you do have sufficient reason to give precisely as much money to charity as you do, right? So there's no weakness of will there, right? But, on consequentialism, the amount that you give is grossly inadequate, right?

Dave,

My interest in the question has more to do with understanding hypocrisy. Does it count as hypocritical that I avow consequentialism and just fail to live up to it? I'm not sure. I think it's related to your concern which seems to be whether you should keep calling yourself a consequentialist. But I've nothing written up.

I think your worry sensible. There is surely something wrong with a moral theory which does not guide the actual behavior of even its most famous advocates and which entails that the best people we know are scarcely better than the worst people we can think of.

To be fair to consequentialism and Singer, I can't think of any moral theory that has an advocate that meets its demands, and consequentialism says nothing about how to rank persons from best to worst (or how to measure the distance between better and worse persons). Gotta run. I sense that someone's bad mouthing consequentialism in some disreputable corner of the internet...

Doug,

Well I think most people working in normative ethics likely lack a view about whether one always has most reason to act as morality requires. I think Sidgwick thought that one would never be irrational to do as C recommends. I don't think of Mill as having a view on the question. Even if there is some tendency for contemporary C's to be more likely to think that it can be rational to be immoral, I do not know that this is a majority view among C's. Do you think it is? Even if it were a majority view, I still think the attitudes I talk about are likely common among those who take no stand on this question or who are rationalists.

It is notoriously hard to say what my view of reasons claims that a specific person has most reason to do. Nonetheless, I see your point that insofar as we are comparing C to a moral theory that lets me do as I like much of the time, C is more likely to conflict with my theory of reasons in a broader range of cases. However, since I claim that some of my concerns are to treat people morally, deontological theories will not scratch that itch (I say). If we compare the different moral views where they both give specific instructions (and the costs to me are otherwise low) such as whether to throw the switch in the trolley case, then I think non-C views will give answers that address my concerns less well in most cases.

Mike,

On the question of hypocrisy, it seems to me at least that to be the ordinary kind of hypocrite the person must think that they have most reason to do as morality commands (and perhaps have a tendency to negatively evaluate others for similarly failing to act according to C—something I doubt is common in those who think of themselves as believing C). Further I guess I want to say that it would be something bad that I don't have a name for to change one's view of what morality requires simply to make it the case that one counts as living up to the morality one believes in.

Tomkow,

I myself think the criticism you offer is not very telling. Notice that I was not offering an argument against C, only saying that perhaps it takes more to count as believing it than many of us are bringing to the table. As I said, I think of myself as persuaded by the arguments I have read that C is the best moral theory out there. That few are living up to it is not a refutation--and if it were then there would be periods in human history that would refute any sane ethical theory. Further, there are whole groups (such as, perhaps, in the Philippines) where people can come to think of themselves as their brother's keeper. C does not ask something that is incompatible with the human frame--for it obeys ought implies can.

On the question of hypocrisy, it seems to me at least that to be the ordinary kind of hypocrite the person must think that they have most reason to do as morality commands...

In order to keep the situation you're describing interesting, at least as I see it, you've got to show that the problem is independent of much of what has been said by way of mitigation. Of course there are cases of weakness of will, and probably that explains some failures to follow C and (I guess) there are cases where 'overall practical reasons' recommend against C and that explains some failures to follow C. But apart from those cases (however extensive and mitigating they are), you've got the common occurrence that C-agents fail to live up to C and the explanation can't be chalked up to weak wills, overall practical reasons, etc. And it can't be chalked up to obvious insincerity in avowing C. The explanations in many cases make the agent look culpable for failing to live up to C. The failure is explained by things like not feeling like doing what is required, or not being in the right mood to act morally, or not feeling particularly generous, etc. It's an empirical question, but I'm sure there are lots of cases like this where otherwise decent consequentialist agents give a pass to pretty clear violations of C. There seems to be something hypocritical (or hypocritical-like) in both avowing a commitment to C and giving a pass to these sorts of violations of C, even if the agent is not obviously insincere in avowing C. I guess it could be some other moral failing. I'm incidentally not picking on C or C-agents, I think the problem is broader.

Mike,

Maybe the place to start is to wonder what "hypocritical" is adding to our assessment of a person who we already understand as not living up to their own moral view. Obviously the word hypocritical adds a negative evaluation of that. To my ear, one is hypocritical if one has a kind of double standard, judging others by one set of standards yet living by another. Since I think most C's would not be any more harsh on others for failing to live up to C than they are on themselves, it was not clear that this additional bit is in place.

Additionally, if we include in the set-up that the agent not only thinks morality requires that she φ but also thinks that she has most reason to obey morality in this situation (and it seems to be the belief rather than the fact that is relevant here), and yet she fails to φ, then I am thinking she cannot avoid the charge of weakness of will.

Maybe the place to start is to wonder what "hypocritical" is adding to our assessment of a person who we already understand as not living up to their own moral view.

Isn't it hypocritical to claim to be a moral vegetarian, say, and to be a carnivore 3 days a week? It's expressly taking the high moral road, but living on the low road a lot. But it gets much less clear when the demands get higher. For some reason, I don't think it's hypocritical when a moral vegetarian chooses not to purchase every item he owns from other avowed moral vegetarians.

Maybe this only makes things more murky. If so, ignore it. I'm wondering whether C's demands are sufficiently high that when C-agents fail to live up to C all the time, they're a little like the moral vegetarian who is not bothered at all by making no effort to economically support other moral vegetarians.

Hi David -

Late to the party, so apologies if I'm duplicating.

I think Doug is on to something. I think we should draw a distinction between your "moral theory" and your "what matters most" theory. I'm tempted to think that my emotions, reactions, etc., etc., really reflect my theory of "what matters most"; but since I'm always spouting off about how great consequentialism is, my theory of what matters most does not include (or includes imperfectly) my moral theory. A consequentialist may not realize that their theory of what matters most includes morality only imperfectly, but insofar as non-consequentialism governs her actions and emotions, and she is explicitly committed to consequentialism as a moral view, why not leave this possibility open?

Dale, (and Dave and Doug),

Presumably, what determines what counts as one's theory of X is static across areas of inquiry. If non-Consequentialism governs Dave's actions and emotions, then that should be relevant to the determination of both his moral theory and his theory of "what matters most," or of neither. Of course, we might say that it's a fallback: If you profess a particular theory of X, then that's your theory. If you don't, then we "read" your theory off of your emotions and actions. This move seems suspect to me. But, in any case, so long as there is an avowed Consequentialist who professes to be a Consequentialist "all the way down," we can raise Dave's question for her. And it would seem ad hoc to grant Consequentialism as her moral theory but insist that non-Consequentialism is her theory of "what matters most." What could justify using different criteria for determining "her" theory in each case?

I suspect that such people exist and that their (and even Dave's) lack of C-action is (as Dave suggested) more about emotional override than theoretical tension. I can cite my own case as some evidence. Commonsense morality is frequently a guide for me--even one that I actively care about. But as a Normative Error Theorist, there's nothing in my professed theory of "what matters most" (nothing) or my moral theory (nothing) that could clash! So the question (as Dave originally asked) seems to just be whether I count as an Error Theorist at all.

Mike,

I apologize for my breach of etiquette in not replying to you first; I wasn't quite sure what to say about weakness of the will.

In your lawn-mowing case, what do you mean when you say you're not sufficiently committed to your principle? Do you mean that you suspect it might not be true or merely that it doesn't motivate you? If the former, then I'm not sure that most Consequentialists do have that problem. If the latter, then I think that's what I was thinking of as weakness of the will (perhaps, as you point out, inappropriately). But what we call it doesn't matter. I profess to believe X, call myself an X-ist, but fail to take X-actions. The question is whether I'm really an X-ist.

One way of reading this question (the way I was) is as asking whether being motivated to take X-actions is relevant to whether I really believe X, under the assumption that whether I'm an X-ist is determined by whether I really believe X. In some cases, though, whether one believes X or professes to be an X-ist isn't relevant to whether one is an X-ist at all! The examples you've been citing (vegetarian, champion arm-wrestler) seem to be of this kind. So I think there are two questions. First, is being a Consequentialist like being a champion arm-wrestler or like being an Atheist (or something else which just depends on what you believe)? If the former, then it seems clear that Dave (and likely everyone else) is not a Consequentialist. If the latter, then we return to the question of whether motivation to act in certain ways matters for belief-ascription.

A consequentialist may not realize that their theory of what matters most includes morality only imperfectly . . .

Maybe I'm alone in finding this puzzling. If your theory of "what matters most" includes morality only imperfectly, isn't that a bit worrisome already? I still think of moral reasons as (just about definitionally) the weightiest practical reasons one might have, so I don't get the idea that an agent might be in a position to relegate moral reasons to 'kinda mattering'. How does one get to do that? Point me to something that might make this (mildly) credible.

FWIW, like Dale, I think Doug is onto something, though I suspect we need some additional resources to tell the whole story. What makes the story harder to tell is (among other things) that you (David S) are already rather sophisticated and aware of the possible tensions in your own psychology. I'd normally be tempted to say that people such as yourself both believe and disbelieve the same theory, but not in such a way that you are in a position to recognize that. But the fact that you yourself raise the worry makes the last part less plausible about yourself.

Just in case restating my own puzzlement is a fruitful contribution to a conversation . . .

Dave, maybe you believe C but do not alieve it?

(I don't think it quite fits, really, but something in that neighborhood.)

Hi David, I'm late to the party too but I find this extremely interesting--and it resembles my own experience as well. I think it might be helpful to separate actions and emotions. As many have pointed out, because of weakness of will, virtually no one can say that their actions match up with the ethical theory that they profess to believe in or accept. But it's more common for emotions, especially more reflective second-order emotions like approval or disapproval or remorse, to reflect our professed theories. Your description makes it sound like consequentialism doesn't capture your considered feelings or intuitions about what you ought to do.

Mike brought up vegetarianism and it's a good example. If I believe I ought to be a vegetarian I may nevertheless eat meat on occasion due to weakness of will. But if it doesn't bother me at all afterwards, if I feel no guilt and even approve of my action upon reflection, then I'm inclined to say that I don't really believe I ought to be a vegetarian.

Tamler, I clearly need to read your paper that Jamie brought to my attention. I especially agree with what you say in the second paragraph above when there is another set of norms that does govern my actions and emotions and that other set has a sensible claim to count as a moral view. I think it sane to imagine an amoralist who sincerely thinks Theory X is the truth about morality, yet who thinks morality not worth caring about. I am less tempted to say that such a person does not count as believing Theory X than the person who has a rival moral sense that controls their action and emotions in the way we traditionally expect morality to do.

Tamler,

Very sorry, after a cup of coffee I realized I was confusing you with Tamar. I don't know why I ever try to do anything before coffee (except make coffee).

David, thanks. One point of clarification--the paper Jamie referred to you is by Tamar (Gendler) not Tamler... (Though I wouldn't mind a J-Phil publication--I hope my tenure committee makes the same mistake.)

No problem, and I agree that the vegetarianism case differs from the amoralist one. I'm assuming the person who claims to believe in vegetarianism cares about morality in general. Once you throw in your lot with the morality game (as the amoralist does not), then I think moral beliefs are more closely tied to emotions. On the other hand, I imagine someone could be a selective amoralist...

Dale,

If I am getting you, I don't think that explains good parts of what I have in mind. As I see it, it is not just that I have private concerns which, by my lights, are more important than moral concerns and this explains the disconnect between my moral view and my view of what matters (to me). Rather, at least it seems to me, I would feel way more actual guilt about tipping a good waitron poorly than about failing to donate so that people in places with dirty water get Oral Rehydration packets. You might try saying that this is because I care more about not being seen to be a poor tipper or whatever, but I don't think that is really what is going on. When I feel such emotions (and I really hope and suspect this is common) they have the guise of a moral concern--the pang is the pang of moral failure (not just the vanilla pang of failing to promote my desires).

I'll try to think more about this as several folks are pushing in this direction. I really hope my experience here is not so specific to me as to be explained by an attraction to subjectivism. If that were the case, I would just be wrong about what I take to be a somewhat common aspect of our moral experience--something that I would have assumed was common to most consequentialists regardless of their other allegiances but broader than that.

I'm not sure whether you are a consequentialist. What should we say about the person who intellectually rejects consequentialism, but on different grounds acts in ways that do in fact maximize good consequences? I'm tempted to say that such a person is not a consequentialist, despite the fact that her acts meet with consequentialist approval. This prompts me to say that you are indeed a consequentialist. But perhaps we should reserve that term for only those whose intellect _and_ whose activity conform to the theory.

I think would-be Christians have similar debates about what it takes to count as a Christian: belief? faith? works? love? Maybe the answer depends upon whether you would maximize good consequences by self-identifying as a consequentialist.

Eric,
That is interesting but I want to hear more about the case. I would think the best case to think about is not just one where the person intellectually rejects C but happens to in fact maximize goodness, but rather the case where she intellectually rejects C yet has both her actions and emotions non-accidentally attuned to whether, in her own mind, she is serving C or not. That is, she tends to feel real guilt (and not just say she is guilty) when she fails to serve what she thinks C requires, and this tendency is quite robust. If that is the case we are considering, what would you say about it?

David,

You seem to be comfortable with the idea that "the best moral theory out there" is one "few are living up to". Apparently you are willing to cut most people, including yourself, quite a bit of slack when it comes to "living up" to that best moral theory … "weakness of will", &c.

But I'm guessing that there are differences in how much slack you are prepared to cut people, differences that don't have a consequentialist justification. Thus I'm guessing you think people who don't feed their own children are somehow worse than people who don't feed--via charity--other people's children? But how much worse?

The discrimination can't be justified on consequentialist grounds so I take it you don't think it a moral difference. What kind of difference is it?

David,

I think the case could go in a number of different directions. Here's just one case: suppose God is a consequentialist, and commands Philip to V, on the grounds that V-ing maximizes good consequences. Suppose that Philip does not self-identify as a consequentialist, but instead as a divine command theorist. Finally, suppose Philip does not believe that God is a consequentialist. Philip Vs simply because God told him to. I don't think Philip is a consequentialist who has failed to admit this to himself, not even if this takes place again and again. Philip is not a consequentialist at all.

It is trickier to describe a case where Philip does believe that God is a consequentialist, but that the rules for God are different from the rules for man. (Cf. Rawls in "Two Concepts..")

Hi David,

Would you argue as follows? Doug's actions and emotions are not governed by his legal positivism conjoined with his views about the content of the law. For instance, Doug knowingly breaks the law on a daily basis in that he often exceeds the posted speed limit and often goes through various stop signs without coming to a complete stop. He has no negative feelings regarding law-breaking of this sort so long as the law-breaker exercises due caution, as he does. Also, Doug used to live in South Carolina at a time when its constitution prohibited interracial marriages. Yet Doug had no negative emotional reactions to those who broke the state's anti-miscegenation laws. Thus, we have to wonder whether Doug really is a legal positivist who believes that the law requires him to drive no faster than the posted speed limit, that drivers must come to complete stop at stop signs, and that miscegenation was prohibited in South Carolina prior to the amendment of its constitution in 1998.

I assume that you wouldn't argue in the above fashion. So why would you argue, in like fashion, that you aren't really a consequentialist who believes that everyone is morally required to maximize the good (impersonally construed) given that consequentialism doesn't govern your actions and emotions? What is, by your lights, the relevant difference between the law and morality that makes the one but not the other line of reasoning acceptable?

It seems to me that when it comes to whether you believe some proposition (even a normative one) what's relevant is whether you're disposed, say, to use it as a premise in your reasoning and to assent to it in certain relevant contexts, not whether you're disposed to live by the norms it expresses. After all, presumably the criteria for believing should be the same across all propositions (normative or non-normative).

On the other hand, consider Mike whose actions and emotions are not governed by his impartial concern for the well-being of Jane. For instance, Mike slaps Jane in the mouth each time she utters a word he does not like. He knows that does not display impartial concern for Jane. He pokes her with a knife when she fails to ask permission to talk. He knows that does not display impartial concern for Jane. He chokes her each time his dinner is late. He knows that does not display impartial concern for Jane. Indeed, he knows that all of this behavior displays a complete lack of concern for Jane's well-being. But he does all of this without emotional qualm. He has no negative emotional reactions at all to his own conduct. Thus, do we have to wonder whether Mike really does have the impartial concern for the well-being of Jane? I'd say, yes, we should be wondering that.

David, it strikes me that your position here parallels Strawson's in Freedom and Resentment. According to Strawson, incompatibilists hold IN THEORY that determinism rules out moral responsibility--and yet when we look at our responsibility-related actions, practices, and sentiments, the truth of global determinism is completely beside the point. So he accuses philosophers who develop their incompatibilist theories of "overintellectualizing the facts." And then he writes:

"Perhaps the most important factor of all is the prestige of these theoretical studies themselves. That prestige is great, and is apt to make us forget that in philosophy, though it also is a theoretical study, we have to take account of the facts in all their bearings; we are not to suppose that we are required, or permitted, as philosophers, to regard ourselves, as human beings, as detached from the attitudes which, as scientists, we study with detachment."

Perhaps a lot of this could apply to philosophers who develop theoretical defenses of consequentialism in scholarly journals but whose actions and sentiments reflect core non-consequentialist commitments.

David,

It just occurred to me that you could cut through some of these worries by distinguishing between being a consequentialist (as in title of the post) and believing consequentialism. You (probably) could not be a consequentialist without having (some) impartial concern for others. That's what it is to be a consequentialist. But (again, probably) you could believe consequentialism without having the relevant concern. That is, you could believe consequentialism without being one. Though, the idea of a purely theoretical consequentialist is pretty creepy.

Hi Mike,

Yes, we certainly have to wonder whether David cares about acting morally as much as he does, say, about his material comfort. As a matter of fact, I think that it's clear that he doesn't. So, perhaps, David is a consequentialist who doesn't care as much about acting morally as he does about some other things. It seems to me that he, nevertheless, counts as a consequentialist (i.e., as someone who believes consequentialism).

Tomkow,

I'm not sure what kind of slack you mean to be saying I am cutting people like myself who do not live up to our own moral theory. I think I am acting wrongly, but I don't get very worked up about that. Perhaps by slack you mean not getting in my and other people's faces about it?

I am not sure about the claim that there is no good C-based reason to be less happy with the person who does not feed their own child. Such a person is a danger in a wider range of contexts given our established practices. If things were OK elsewhere, such a person would still create harm, whereas the person who does not give would not similarly be a harm in such contexts. Also, I am confident it would be counterproductive to express the same degree of anger at the people who fail to give to charity as we do to the people who do not feed their own kids.

Doug,

Keep in mind that I stipulated that we are dealing with someone who is not an amoralist. And keep in mind that I wanted there to be a rival moral theory that we might attribute to the person--a rival moral theory to what they say in seminars which guides their action and emotions in much the kind of way that we tend to expect a (non-amoralist) person's moral views to guide their actions and emotions. I don't see any such parallel in your Doug example.

Tamler,

Thanks, that is a very useful point of comparison. We are likely to see just such a similar sort of divide between a person's intellectual take on the question of determinism and her emotional reactions.

Doug,

Reconsider the 'Mike case' above and suppose he claims to believe that his wife Jane deserves special, favorable treatment. The case then goes,

. . . consider Mike whose actions and emotions are not governed by his belief that his wife Jane deserves special, favorable treatment. For instance, Mike slaps Jane in the mouth each time she utters a word he does not like. He knows that this does not display special, favorable treatment for Jane. He pokes her with a knife when she fails to ask permission to talk. He knows that this does not display special, favorable treatment for Jane. He chokes her each time his dinner is late. He knows that this does not display special, favorable treatment for Jane. Indeed, he knows that all of this behavior displays a complete lack of concern for Jane's well-being. But he does all of this without emotional qualm. He has no negative emotional reactions at all to his own conduct. Thus, do we have to wonder whether Mike really does believe that Jane deserves special, favorable treatment? I'd again say, yes, we definitely should be wondering that.

Unless there is some so far unstated explanation for why Mike's behavior is so wildly inconsistent with his professed beliefs, the best explanation for what he is doing is that he does not actually have the belief he professes to have.

David,

When you say that there is a rival moral theory (a rival to consequentialism) that "guides"/"governs" your actions and emotions, do you mean that you appeal to that theory as a sort of decision procedure or merely that there is some substantive theory which your actions and emotions conform to? If the latter, why do you think that, in my legal case, there can be no rival theory (not even a deeply pluralistic one) with which my actions and emotions conform? If the former, then I'm puzzled by your initial description of your psychology which cited your conscience as your guide. I was assuming that you don't actually say anything like the following to yourself "lying for the greater good would be wrong, so I won't do it". Rather, I assume that you've just been inculcated with a disposition to avoid lying and feel bad when you do, not because you believe that it's wrong or doesn't accord with some deontological theory's norms but because you've been socialized that way.

And there's no reason why we can't assume that in my example I have some motivation to obey the law. It's just a weak one that's often overpowered by the motivation that I have to do what's best in terms of my self-interest.

