
May 19, 2010

Comments


Hi Jussi,

I take it that you are interested in our purported beliefs about "the moral status of the different qualities which the relevant actions might have".

Now you might be right about error theorists, but there seem to be lots of other cases in which changes in thinking result from appreciation of evidence.

Take my former belief that pain is bad. I read Dancy's Moral Reasons and, in part on the basis of the evidence he marshaled, changed my mind; I now believe that pain is sometimes good.

Call me a neophyte in this area, but I suspect the "ordinary belief" question leads to a dead end. In my understanding, a moral belief spurs action because "it is the right thing to do", whereas an "ordinary belief" seems to be more subject to the cost-benefit or risk-reward analysis of Rational Choice. Put another way, a deeply held moral belief should initiate action (or restrain action) without much or any conscious thought, because of the strength of one's conviction in the rightness of the action or restraint. Perhaps examining habituation might yield fruit, though.

Brad,

I wonder if your case is similar to the eating meat case I gave. Of course I shouldn't speak for what happened with you, but what often happens to me when I speak with Dancy is that I start from a belief that x is bad. He then gives me information about the situation in which x is the case (call this additional information y). As a result I find myself thinking that x & y is good. I still think that x (without y) is bad. In a sense, then, I haven't changed my mind about x. I've just gained an additional belief about the value of x in contexts I hadn't considered. So, it's not clear I've changed my mind about the goodness and badness of things given whole descriptions of the situations in which those things are represented. Maybe this wasn't the argument you had in mind, but I would be interested in hearing more about why you changed your mind.

MarcusAquinas,

I would be interested to hear why you think that the ordinary belief question is a dead end. You might be right that the way ordinary beliefs motivate does not shed light on the question of what moral thoughts are. I was more interested, however, in another aspect of such thoughts: their sensitivity to evidence. Also, it's true that moral beliefs tend to initiate action without conscious thought. This, however, is neutral between externalists, who think that we need external desires to be motivated, and internalists, many of whom think that the moral belief itself can do the motivational work.

Jussi,

I think your description fits my experience up until the point where you say "in a sense, I haven't changed my mind about x." I started thinking that pain is always bad, so when I came to think that there are some circumstances in which pain is not bad, that amounted to changing my mind about the evaluative significance of pain.

I do not yet see how you resist thinking that. Perhaps your starting belief was not that pain is always bad? Or would you resist saying that you came to think there are some circumstances in which pain is not bad? Can you explain?

Also: I now have some doubts about this issue due to Mark S's discussion of how implication can affect our intuitions. This strikes me as further evidence that an untutored belief about the value of pain can be refined by the assessment of purported evidence.

Dear Jussi

Your line of argument reminded me of Hallvard Lillehammer's paper "Moral Cognitivism" in Philosophical Papers. He too starts from a consideration of the defining marks of beliefs (or cognitive states) and then tentatively asks whether moral thoughts possess these marks.

Regarding the error theorists, my guess is that they do think their moral beliefs are sensitive to thoughts about evidence for their truth/falsity, but are also sensitive to other considerations (e.g. practical use) which may be more weighty. But I suspect others are better qualified to comment on that.

Yours

Neil


I'm really struggling with how to put this, and the fact that what is at issue is a universal generalisation makes this even more complex.

I guess the idea is that there's no rational change of mind here about the goodness and badness of pain in a case. Rather, we acquire a new belief about a case we hadn't considered before, or about which we have acquired new information. It might be that we were already disposed to judge the case in a certain way. So, you might think that these dispositions are the core moral thoughts and that they have remained the same.

You might be right that we do change our minds about the universal generalisation on the basis of the new beliefs about the new instances. Part of that might be explained by compositionality and how universal quantification works in general. Of course, one argument for cognitivism is that our moral thoughts are embedded in such structures, but you might think that, for instance, fictionalist non-cognitivists can explain this too.

I probably don't have to deny that moral thoughts have some functional features of beliefs, such as the data you point to. I think it might be enough for me if it were the case that, as a class, as a whole, our moral thoughts are not sensitive to the tout court thought that there is sufficient evidence against the truth of all of these thoughts.

One analogy might be religion. So, you might think that a theist can change her mind about how many miracles Jesus performed on the basis of new information from the Bible. To some extent, this makes her thoughts about the miracles belief-like. However, if she keeps these thoughts even in the face of sufficient evidence that such acts are not possible, you might think that this tells us that her thoughts are more faith than belief proper.

Neil,

thanks a lot. I definitely need to check out that paper by Lillehammer, as it sounds just like what I had in mind.

I'm not sure that what's relevant here is whether error theorists think that their moral thoughts are sensitive to evidence; the question is rather whether their moral beliefs (like everyone else's) *are* sensitive in that way.

Also, this might make their position incoherent. Either they would have to say that ordinary beliefs too are sensitive also to other considerations (such as practical benefit), so as to make the functional roles identical, or they would have to agree that ordinary beliefs and moral thoughts have different functional roles. The latter option would undermine their position, as error theory requires that moral thoughts are beliefs proper.

Jussi,

Why not think, rather than that our moral beliefs are insensitive to evidence, that we simply tend to treat our reactive attitudes/pre-theoretic intuitions as evidence--evidence that overrides evidence of many other kinds? After all, isn't that what many moral philosophers explicitly do? They claim that certain intuitions, for example, are more epistemically viable than all the error-theoretic metaphysics you can throw at them?

That's one option. But even if you don't take this route, I'm not sure you've offered that large a bullet to bite. I am an error theorist. And, as you've suggested, I find I continue to have moral thoughts. I do not endorse these thoughts, and thus I tend not to consider them beliefs. In that case, I don't see the problem.

But suppose you insist they are beliefs. Then why not simply admit I'm irrational? This seems much better to me, since I would feel irrational. My considered belief in error theory is phenomenologically at odds with these moral thoughts. If the thoughts are beliefs, this feeling makes sense. Telling me that they are really just attitudes of some other kind doesn't seem to fit.

Hi Jussi. Just a quick comment that Horgan and Timmons look like they're following something close to, as well as something more than, the strategy you're suggesting, at least in "Cognitivist Expressivism," from their edited collection *Metaethics after Moore*.

Hi Jussi,

I think one confound here is that really radical error theories quite generally are difficult to believe. I might have the best argument in the world for mereological nihilism. Am I going to drop my belief that I exist? I suspect not. Cue lots of discussion about putative "Moorean" truths.

I think that to make a case for a functional difference between moral thoughts and beliefs, you'd need not just resilience to philosophical arguments in the former, but also to show that this resilience has no precedent in the latter. And that strikes me as quite a hard ask.

But do you need this kind of radical proposal to get a clear case? Brad C gave one interesting case. But what about others: people changing moral views about, e.g., distributive justice, or some hackneyed consequentialist/deontological test case, after reading things through, arguing with others, and all the rest? Or, if not some profound disagreement, what about people changing their minds within broadly similar views---from maximizing to satisficing consequentialism, for example? (Or are these the examples you thought were rare? I dunno---I bet a bunch of people are agnostic about various hard cases, and would be swayed by what looks like evidence.) It's not obvious in such cases that there's an invariant attitude to the moral status of certain properties, as you thought there might be with the vegetarianism example.

Jussi,

A couple quibbles about changing minds:

If we accept that "In a sense then, I haven't changed my mind about x," we would have to doubt that any beliefs ever change. For example, imagine you told me that your house is painted red. I might respond, "Nope, houses just can't be red; that's madness!" Then you produce a picture of your house, and of course, it's painted red. "Oh," I say, "I believe that houses can be painted red after all." Is it meaningful to say that I haven't changed my mind about houses (without evidence of red houses)?

I'm not really sure beliefs work that way. If I imagine the possibility of a belief that changes without any additional information (such as x changing without the addition of any y), it seems like I'm imagining randomness - as if x has changed for no reason. The implications of beliefs changing randomly are a little unsettling for me.

Jussi, it seems like the heart of the issue is whether our moral judgments are "systematically sensitive to our thoughts about the evidence for truth and falsity of those states". Whether individuals undergo changes in their basic moral judgments, and if so, how, is obviously relevant to the truth of that claim. You seem skeptical about such changes being accounted for in terms of individuals responding to evidence since, as you put it, "moral conversions are rare and not usually rational, evidence-based processes."

But I worry that this elides an important distinction between the mechanism of attitude change and the nature of the changed attitudes. It is possible to undergo moral conversion without having been moved by evidence. But this doesn't entail that the post-conversion judgments are therefore not beliefs or similar evidence-based states. They would be if the agent can give a rational account of the conversion. In other words (as I try to argue in one of my manuscripts), it's not how the change in attitudes takes place, which might be, as you say, not rational or evidence-based, but how the change is justified that matters here. The new moral attitudes might be first in the order of discovery, but after the conversion, second in the order of justification.

Wow. Thanks, everyone, for the really good comments.

David,

the first point: that might be right. All I am interested in here is the nature of the mental states that you are calling intuitions. The second point: I don't quite see how you can be an error theorist and not consider your moral thoughts beliefs. If they are not beliefs, then you are not in error. The third point: I tend to think that the assumption of rationality is a precondition of making sense of others - this just comes from the standard principle of rationality/humanity.

Dan,

thanks. I did have something like their position in mind. If I remember right, they take the inferential role of moral thoughts as a reason to think that they are beliefs, but say that ought-beliefs also have an action-producing role, which is the reason why they are not descriptive. So the results may be slightly different, but you are right that they are thinking along the same lines.

Robbie,

It's true - Bart Streumer explicitly defends that line - he says he is unable to believe error theory, and that's why there's no problem for him in keeping his beliefs. Yet all other error theorists have so far reported believing their views. And, as mentioned above, some say they have given up their moral beliefs as well (albeit for prudential reasons, not for evidence).

I certainly want to accept that people can change their views about ethical theories and the like on the basis of rational philosophical discussion. Usually, though, I tend to think that this happens on the basis of coming across new cases in which the theories are applied, vividly placing oneself in the shoes of the people affected, and so on. I quite like what Hare says about this. The rational argumentation seems to begin from stably held basic convictions. It would be interesting to see how these change. Maybe there's a range of moral thoughts, some of which are more belief-like than others.

James,

the house case seems right to me. Of course, you change your mind there on the basis of evidence. Maybe I don't see the analogy. I agree that beliefs shouldn't change randomly.

Michael,

that sort of gets to the heart of the matter. Most philosophers of mind classify intentional states by their causal roles - what kinds of other attitudes cause them and what other attitudes they cause. I thought that sensitivity to evidence would be a constitutive part of this causal pattern for beliefs.

You are right that there are normative theories of the nature of intentional states. On these views, what is constitutive of being a belief is how the state ought to react to other states. Part of this thought can be accommodated by the rationality assumption when we discuss the causal roles. And there are people who defend these kinds of views well. Yet I quite like a paper by Copp and Sobel from a while ago in Analysis where they argue that such views will collapse into the previous, causal-role views because of the supervenience of normativity.

Perhaps these error theorists could say that their continuing to hold moral beliefs is a form of 'rational irrationality'. They might say it is epistemically irrational, because these beliefs do not conform to the evidence, but prudentially rational, because the costs of giving up the beliefs would be too great.

Hi Jussi. If I'm recalling the article right, H&T look at several features of belief, including their functional role(s) but also, interestingly, their phenomenology. They then argue that moral thoughts are belief-like with respect to these roles, phenomenology, and other features.

Jussi,

Sorry, I should have been clearer. As an error theorist, I don't believe that any moral propositions are true. Often, though, I have "intuitions"--thoughts that things are good/bad/right/wrong. These intuitions present themselves as (or lead me to) propositions that purport to describe the world. That is, they are very much like beliefs. But when I have these thoughts, my attitude is that they are just that--thoughts I cannot keep myself from having; I do not endorse them or see them as epistemically supported in any way, and thus I am hesitant to call them beliefs. I take it, though, that most people do endorse these thoughts, and thus they are properly beliefs in others. Does that make sense?

I agree that seeing others as rational in general is important. But I think this is about how we see them generally; we might be able to isolate specific aspects of a person as irrational. So, for instance, suppose I learn that I (and others like me) have an attitudinal system that causes us often to form beliefs about a particular matter without any evidence. I don't think I would have any trouble making sense of myself or others like me if I just admitted that, with respect to that matter alone, we are prone to irrationality. This is precisely how I think of morality.

I think we need to be careful to distinguish one's evidence from one's beliefs about one's evidence: the error theorist can allow that she is often in possession of evidence that she ought to do one thing or another - she need simply add that she believes this evidence is all misleading. She is like someone who continues to be confronted with evidence that a stick in water is bent long after she has come to believe the stick is actually straight, and that she is experiencing an optical illusion. However, I believe this analogy breaks down precisely at the point when we ask whether the error theorist is really rationally justified in believing there is nothing she ought to do (I think she is not at all like somebody who has come to know that a stick is straight by some independent perceptual test).

Since I claim that a normative reason to X is evidence that I ought to X, it follows that I think one may have normative reasons to do things even if the error theorist is correct (i.e. even if there is nothing one ought to do).

Jussi presents the following challenge: "Imagine that I first think that eating meat is ok. I then get evidence that the animals I eat are hurt in the production. As a result, I come to think that eating meat is wrong. You might first think that this shows that my initial thought that eating meat is ok is sensitive to my thoughts about evidence. But, this doesn’t seem quite right. I’ve changed my mind about the qualities of certain actions (that one is eating meat of animals that suffered) but I have not changed my mind of the moral status of the different qualities which the relevant actions might have. And, it’s the latter kind of changes of moral thoughts that we would need to consider. Unfortunately, it seems like that kind of moral conversions are rare and not usually rational, evidence-based processes."

I argue against this sort of challenge in "Moral Skepticism for Foxes" (available on my website). Briefly, I think that what is wrong with it is that when we focus on our cognitive commitments (beliefs or credences) concerning how various non-normative properties and various normative properties coincide (i.e. on normative principles, in a broad sense), we will often find that our credence in the principle that pairs non-normative and normative properties is lower than the credence that simply goes with the reason/evidence we are immediately confronted with (assuming we are rational). For example: my commitment to the evidence that I ought to help Joe with the broken leg in front of me is stronger than my commitment to any particular moral principle which I could use to direct me as to what I ought to do in the present situation. (At the very least, this will often be the case for ordinary agents.)

Just a brief correction. It's NIETZSCHE'S moral falsehoods rather than mine which were supposed to be life-affirming. I don't mean of course that mine are life-denying, but the 'life-affirming' stuff is my attempt to characterize his ideology not mine.

Thanks again for the comments, everyone, and sorry for the lack of responses - marking time and everything. Here are a few thoughts.

Campbell,

I think that's right. But there's a niggle. Many epistemically unjustified ordinary beliefs would be prudentially justified. It would do me wonders to believe I am a popular person. Unfortunately, ordinary descriptive beliefs cannot be kept merely in virtue of such prudential justification - they are just not responsive in that way. If moral beliefs can be, then there seems to be a difference between the two states.

Dan,

that's right. Phenomenology of beliefs is an interesting issue which hasn't been explored enough. Galen Strawson, for one, thinks that phenomenology of beliefs is crucial for the content of any beliefs.

David,

that makes sense. It seems to show that, on introspection, hermeneutic fictionalism is the true metaethical position in your case. I would recommend Mark Eli Kalderon's book for further self-reflection.

Daniel,

It is interesting that you think that there can be normative reasons without oughts. Either evidence claims (for you, normative reasons) are normative or they are not. If they are normative, then I cannot see how all the error-theoretic arguments that apply to oughts would not apply to them. If they are not, then you seem to be getting rid of the normativity of normative reasons.

I'll check that paper of yours as it sounds interesting.

Charles,

thanks for the correction. If that's right, then I do find the following sentence from your paper very misleading:

"So long as they [moral beliefs] are 'species preserving' and all the rest of it - which Nietzsche's overman ethic was supposed to be - moral beliefs need not be given up."

I find it hard not to read this as an endorsement. Maybe it would have been useful to flag more clearly that you don't think this.

Jussi,

Brad had suggested that moral beliefs change when new information is added (x & y), and you noted that (x & y) doesn't mean that x (without y) has changed. My red house analogy is the same. I applied your rebuttal to a non-normative belief rather than a normative one, to show that belief x does - or at least can - change based on its context; in this case, in the context of y. Now, x might not change, but what it won't do is change without y - without new information. That would be a random change.

Maybe this can be said very compactly, as follows: beliefs require new information to change.

I hope that's clearer.

Jussi: reasons are normative, so far as I'm concerned. I didn't mean to be responding to or ruling out all error-theoretic arguments (the paper is quite modest), but if reasons are evidence (in the precise way I favor), then such arguments would have extremely radical consequences if sound. I'm broadly sympathetic with Terence Cuneo's project in The Normative Web (even though I point out some dialectical problems for it in my book review): normativity governs belief as much as it governs action, so an attack on one front is often an attack on the other.

hmh. I'm not sure I get this. You wrote:

"one may have normative reasons to do things even if the error theorist is correct (i.e. even if there is nothing one ought to do)."

I guess it's true that on many views there can be evidence for false claims. So, there could be evidence for oughts even if error theory were true, provided evidence is understood in non-normative terms (if evidence is a normative notion, then there couldn't be such evidence). Maybe there could even be conclusive evidence for an ought for some agent in a given situation.

On your view, this conclusive evidence translates into conclusive reasons to do x. What I don't understand is (setting enticing reasons aside) how there could be conclusive reasons to do something even when it's not the case that one ought to do that thing. I know this possibility follows from your view (as you wrote), but this seems to count against the view. If we talk about normative reasons, it seems to be part of what they are that they require actions. Your view, if what you write is part of it, seems to take this requirement away. This is why I'm worried about your view taking the normativity away from normative reasons.

Of course, if you understood evidence in normative terms, then the problem goes away. In that case, if error theory were true, there wouldn't be evidence and thus no reasons either. This seems like the right kind of radical consequence of error-theoretic arguments. But then I wonder how the normativity of evidence can be understood independently of a notion of reasons. If it cannot, then the basicness of reasons is retained.

Jussi: If there were no true ought propositions (a claim I am admitting is philosophically coherent, even though I actually think it is necessarily false), then there would be no conclusive reasons. Why would there be no conclusive reasons? Because then there would be no conclusive evidence. Why? Because conclusive evidence is factive (i.e. if there is conclusive evidence that P, then P). Stephen and I say this in our OSME paper (when considering the reason implies can objection). Still, there would be reasons/evidence that I ought (in worlds like ours, insofar as they have people in them etc.).

I think your worry is that these reasons wouldn't be normative. However, ought propositions, whether true or false, are normative propositions. What it is to be a normative reason is just to bear the right relation to an ought proposition. On our account, that is an evidential relation.
