


July 16, 2007



Hi Heath,
The first premise of (i) is tendentious. Is the main question of ethics how to live/what to do, or how one *ought* to live/what one *ought* to do? Those who think there is a significant difference between moral reasons and the others will say the latter. So then you either need an argument that the question is (au contraire) the former, or you'll need the premise that there is no serious distinction between moral and non-moral oughts. That question is primarily about whether there is a distinction between moral and non-moral reasons, which is what you're contending.

Though I'm somewhat sympathetic to these doubts, it does strike me that there may be some strong countervailing reasons to take the moral/non-moral distinction seriously. For one, we tend to believe that many of the reasons we'd be inclined to call moral reasons have a certain normative priority over the reasons we're inclined to call non-moral reasons. I take it that the fact that these reasons are moral reasons, as opposed, say, to prudential reasons, is supposed to function in an explanation of that priority. If we're to reject the distinction (or at least the seriousness of the distinction) between moral and non-moral reasons, then it seems to me that we'd better either have some alternative explanation of the priority of these reasons (the ones we're pre-theoretically inclined to regard as 'moral reasons') or a persuasive argument to the effect that this priority is in fact illusory and hence not in need of explanation.

So while I don't even know where to begin to answer your first question, I wonder whether we can't answer the second like this: we'd like to know which are the moral reasons, as opposed to the non-moral reasons, because, insofar as we're good people, we'd like to give appropriate priority to the former over the latter.


I'm less than entirely clear on how to draw the distinction. It might have something to do with impartiality, or other-regardingness, but both of these are tendentious. That there is some such distinction, though, strikes me as pretty plausible.

Why make it? The biggest gain, as I see it, is that it allows for a kind of external analysis of morality that would otherwise be somewhat difficult. Suppose you think that morality is, from the standpoint of practical reason, a lot like etiquette. You might have good reasons for wanting to know whether morality gives one a reason for doing this or that, in just the same way that you would want to know what etiquette tells you to do. This could be interesting regardless of whether or not you believed morality gave one reasons for action in the all-things-considered sense. It could help you predict the behavior of others, talk meaningfully with them about the practice, etc. Of course, whether morality (understood as a social practice like etiquette) does give you reasons in the all-things-considered sense is an interesting and important question, and hence it's useful to have a concept of all-things-considered reasons in addition to moral reasons (and reasons of etiquette, reasons of baseball, etc. I don't see any reason to be parsimonious in the subscripting of reasons).

I suspect the distinction will be more appealing to those of us with externalist leanings. And, I suspect, it's probably also more appealing to folks who reject a robustly naturalistic metaethics, but I'm less sure on that one.

Maybe I'm just out on a limb by myself, but I think all reasons to act are moral reasons. So I do think that the moral/non-moral distinction, at least in an important sense, is empty.

This is a very interesting set of issues. I can't wait to see what the regular contributors have to say about it.

Here are two quick questions:

1. Let's say that I have to choose between helping my brother cram for a final and going away on a planned rock-climbing trip. Helping my brother would be the (morally) best thing to do, but you give me reasons that show that I would be within my rights to go rock-climbing. Are your reasons moral ones?

2. I have $100. There are two morally optimal things that I could do with that money: give it to charity A or give it to charity B. As it happens, giving it to charity B would also better serve my own interests, and this breaks the tie. Would this tie-breaking reason then qualify as a moral one?

Rather than specify which moral theory your account of morality is based on (which, it strikes me, each of your three options does), couldn't we just say that moral reasons are those derived from ethical theory and that non-moral reasons are not?

I don't think we need consensus on the content of the distinction before it can be useful.

I think I'm along the lines of Jonathan Dancy here. He writes, 'Consider here the sad fact that nobody knows how to distinguish moral from other reasons; every attempt has failed', but still relies 'on the reader's intuitive grasp of this distinction'.

This seems fair. When people tell us why they did such and such, we do have an intuitive way of classifying their reasons into moral and non-moral ones. Of some of them we are very certain, about some we are uncertain and disagree, and of some we are certain that they are not moral. The reasons that belong to the moral class can be a mixed bag. I don't think there has to be one necessary and sufficient condition that can be theoretically revealed. But we can give interesting accounts of what features many of the reasons share.

The situation seems to be similar to the Quine vs. Strawson and Grice debate about analyticity. Quine claimed that there is no way to distinguish analytic truths from synthetic ones non-circularly, and that there is no way to tell, in some cases, whether a truth is based on synonymy or not. His conclusion was that therefore there are no analytic truths. The Strawson and Grice reply seems convincing. They claimed that the distinction still seems informative, and even if we cannot tell, for many instances, to which class the truths belong, we *can* tell of many others that they certainly belong to the analytic class. Indeterminacy of some cases just does not imply that there is no useful distinction.

I'm not sure we should think that there is only one main question in ethics that we should all pursue solving. There are very many interesting ones. And metaethical questions are and should be complicated. Morality is complicated, as is its relation to the rest of our practical life.


Suppose we accept a theory like Ted Sider's Self/Other Utilitarianism (SOU). On SOU, an act is morally permissible iff it either (1) maximizes overall utility or (2) maximizes utility for others. Thus, on SOU, the fact that some act would promote the utility of the agent is not the sort of fact that can render it morally required. By contrast, the fact that some act would promote the utility of others is the sort of fact that can render it morally required. Now, as I draw the moral/non-moral reason distinction, the reason an agent has to promote his own utility is, on SOU, a non-moral reason and the reason an agent has to promote the utility of others is, on SOU, a moral reason. Very roughly, as I draw the distinction, moral reasons are capable of generating a moral requirement, whereas non-moral reasons are capable of generating some sort of practical requirement/ought (e.g., prudential) but not a moral requirement/ought. This is pretty rough. But I give a more careful explication of the distinction in my paper "Moral Reasons, Overridingness, and Supererogation." Do you see any problem with this way of drawing the distinction?

This way of drawing the distinction is, I believe, useful in accounting for supererogation and agent-centered options.

Hi, Heath.

If you're like me, you think that the main question of ethics is not just how to live (or how we ought to live), but what explains why that is so. And if you're like me, you think that different kinds of factors contribute to answering this question - what you ought to do is a result of putting a lot of different factors together. So if you're like me, you'll be concerned about differences in how these contributions to what we ought to do work, because those differences will result in differences in the explanatory theory of why we ought to do what we ought to do.

To take an example that doesn't seem very controversial, if it makes sense to distinguish prudential factors which contribute to what you ought to do from non-prudential factors, that may be for a variety of reasons. It may be because prudential and non-prudential factors contribute differently to what you ought to do. Or it may be that they are explained in different ways. Whatever the reason, even before we have a theory about how to draw the line between prudential and non-prudential factors, we may have much cause to be interested in this distinction.

The same goes, I think, for morality. The distinction between moral and non-moral reasons for action is a distinction among factors contributing to what we ought to do. This distinction may be important because different kinds of factors contribute differently to what we ought to do, or because they are explained in different kinds of way. For example, some people think that non-moral reasons can make actions permissible, but can't make them obligatory. Ideas about the different significance of moral vs. non-moral reasons can even lead to proposals about what that difference consists in - perhaps, if such a view were right, moral reasons just are the kind of reason which ceteris paribus make actions obligatory.

On my own view, different kinds of reasons are differentiated by their source. Professional reasons are reasons you have because of your profession, legal reasons are reasons you have because of the law, and moral reasons are reasons that you have because of a certain kind of explanation. To be more specific, my rough view is that moral reasons are reasons that you have simply because you are the kind of thing which acts - or reasons which are derivative from such reasons. It follows from this characterization that the basic moral reasons are reasons for any agent, no matter what she is like - and that's a worthwhile thing to distinguish, if we're trying to develop an explanatory theory which explains, for arbitrary agents, what they ought to do. What an arbitrary agent ought to do will be a product of the reasons that apply to any agent, together with the reasons that apply to this particular one. In my view, another reason why it is an important distinction is that such reasons - ones that apply to anyone no matter what she is like - are predicted by my account of the weight of reasons to be in general weightier than idiosyncratic reasons - reasons that are reasons for only some people. And that seems important to understand in order to fully explain what people ought to do.

This is a vexing topic. Still, the distinction between moral and non-moral reasons seems clear to me provided that you appeal to it in the context of a moral theory. Presumably, this moral theory will generate reasons for action, belief, etc., and these will be moral reasons. Any reasons not generated by this theory will be non-moral reasons.

Of course, if you define a moral theory as a theory that attempts to explain how we ought to live, it is not clear that such a theory would leave room for non-moral reasons.

(i) A reason for action you would have if you treated everyone equally, or took everyone’s interests equally into account. As many others have pointed out, this appears profoundly immoral: if I don’t treat my children or wife preferentially to yours, in situations where I have to choose, there’s something wrong with me.

I can't think of a single plausible moral theory on which everyone ought to be treated equally. To make the distinction between moral and non-moral reasons it's best to abstract from what constitutes a morally relevant property (or relation). Suppose what morality requires is that we treat people impartially (rather than equally), where S treats X impartially iff S does not treat X in ways that favor or disfavor X on the basis of some morally irrelevant properties (or relations) of X. This of course is consistent with treating people radically unequally. For instance, I might give a full dose of morphine to Y rather than X, given an important and morally relevant property of Y (say, she's suffering much more). I might favor my wife over your friend because I stand in a morally important relation to my wife, say, the relation "spouse of". If I favor or disfavor X on the basis of properties that are not morally relevant, then I am favoring or disfavoring X for non-moral reasons. If I favor or disfavor X on the basis of properties that are morally relevant, then I am favoring or disfavoring X for moral reasons. Maybe that makes the distinction between moral and non-moral reasons clearer.
Of course, what counts as a morally relevant property or relation of X is a matter for moral theorists to determine; but that's not the question here. So we can disagree about whether, in any particular case, our reasons are moral ones or not. But we can agree about what constitutes a moral reason. An example: I favor myself in the distribution of some scarce good, and my reasons are that my interests matter more morally than the interests of others. If it happens that my interests do indeed matter more, then I favor myself for moral reasons. If it happens that my interests do not matter more, then my reasons are non-moral.

Heath, like you I am attracted to the broadly Aristotelian way of thinking about these things, so I am inclined to be suspicious of the way the line is usually drawn as well. However, there is a distinction in the neighborhood that I've been tempted to think isn't implausible as a way of tracking such a distinction — though it wouldn't line up with most other ways of drawing it.

It does seem to be a matter of common sense that, while we have reasons to take into account the goods of lots of others for lots of reasons, there are special cases in which we are accountable to them for doing so — rights claims or the like. Darwall's recent work on second-personal reasons tracks this distinction. On this way of marking it off, moral reasons are reasons for action for which others may hold us accountable to them, second-personally, as a matter of what we owe them. (This category of reason is missing from the ancient accounts, but it's needed for any moral view to be plausible, I think.) Other reasons — including reasons for (say) my own perfection, or reasons for beneficence, which might otherwise be counted as moral — don't on this way of dividing things up count as moral.

That's an idiosyncratic way of doing things, and maybe it would be clearer just to drop the attempt to vindicate the distinction, since it comes out so different this way than it does on Doug's view, or Mark's (though you can also see this as a species of categorization of the sort Mark suggests), or others. Whether it is ultimately worth marking in this way I am not settled on myself. I am certainly not inclined to trust the moral/non-moral distinction to bear much weight.

It seems to me that when we use the word 'ought' we can substitute either a reason for doing x if we are not doing x or a description of x when x is what we should be doing when we are already doing it.

1) The use of the word 'ought' in the sentence 'A ought to do x' implies that A is not doing x. In these situations it serves the purpose of making a recommendation concerning how one should act, but in itself it has no motivational impact. The motivation to act when one is not doing x is the reason that supports doing x. For example, 'I ought to go to the doctor' only makes sense if I am sick, need to go to the doctor, and will be worse off if I do not. My actual reason for acting is not that I ought to, but that I do not want to be worse off. Relative to Heath's point of how to live, the 'ought' has no role to play other than indicating that there are reasons for not living the way one is living. In this situation where I have a reason to go to the doctor, I could say either, "I ought to go to the doctor," or "I am going to go to the doctor." Both would equally serve the purpose of indicating that I have a reason that supports me going to the doctor.
2) If one is living as one ought to, the 'ought' again has no motivational force. In this case it is simply shorthand for a description of how one is living. When I say that I am living as I ought to live, a person can ask me how I am living that makes me say that it is what I ought to be doing. I then have to give a description of how I am living my life that explains that what I am doing is what I ought to be doing.
3) I can always ask the question, "ought I to do x," but this question does not have motivational force. It is simply a request for a reason that would justify me doing x, or not doing x.

I agree with Christian that all reasons are moral reasons, because it seems that we can always ask if we should be doing what we are in fact doing. I think it was Brand Blanshard who once wrote that the language we are using indicates whether we are in the moral sphere or not. If we can use the word 'should' then we are in the moral sphere.

I think the reference to Dancy above is helpful here. Dancy, recall, is a particularist about reasons, so he doesn't believe in a priori claims about what (kind of) factors will be reason-giving, whether they'll be for or against a given course of action, or what normative force they'll have outside a given context. It seems that many want the moral/non-moral distinction to do the work of addressing the last issue, normative force – i.e., moral reasons carry greater normative force and therefore trump non-moral reasons. If you're a particularist like Dancy, or Aristotle for that matter, then you'll seemingly have no use for the moral/non-moral distinction, I think, insofar as there can be no substantive class of considerations that a priori have greater normative force as reasons than others.

Not sure what view I'd endorse on this issue, but a kind of nominalism or functionalism about 'moral reasons' might work. Moral reasons are those diverse considerations that are used in moral reasoning (that, of course, pushes the question back: what is 'moral' reasoning?). Or perhaps there's a Scanlon-type move available here: moral reasons are less a substantive kind of reason than reasons that are non-rejectable in a particular context of deliberation.


Many thanks for your comments. The view I am coming to as I read through them is that there are several interesting distinctions in the neighborhood, no one of which lines up neatly with what is usually considered “morality.” I wonder if “morality” is not a mishmash of several other categories.

Several people have proposed ways of distinguishing non/moral reasons that look circular to me. For example, David H and Mike V suggest that moral reasons are those derived from a moral theory. Doug appeals to reasons that generate a moral requirement. Mike appeals to morally relevant properties. Michael Cholbi (who sees the circularity) appeals to moral reasoning. Surely, unless we can distinguish the moral theories/requirements/properties/reasoning from the non-moral kind, we have made no advance here. Are there ways to do this that I’m missing?

Then there are several other proposals that strike me as important distinctions, though not necessarily the non/moral distinction. Sean says that moral reasons are the ones with priority; well, maybe—it’s a good stab, and if there are reasons with priority this would be a good thing to know. Some people have thought that moral reasons weren’t always overriding, however. Mark S suggests that moral reasons are the ones we have simply in virtue of being an agent. I think we probably do have such reasons. But consider, e.g. the prohibition on adultery. This seems like a moral prohibition, but one that only makes sense in the context of certain contingent features of human emotional makeup and child-rearing necessities. Mike LeBar suggests that moral reasons are the ones for which others may hold us accountable. As he recognizes, this doesn’t track too well with ordinary morality. But I agree it’s an important category.

I thought Doug’s point about accounting for supererogation was a good one—I’m not sure what to say about that. Matt Z’s externalist perspective makes sense, but of course he’s going to have to figure out how he’s going to live, so whatever he comes up with will be an ethic of some kind, according to me. Matt McAdam understands where I’m coming from – the non/moral distinction makes most sense in the context of a certain view of the structure of reasons for action (of which I’m somewhat doubtful). And I think he does a good job of replying to Jussi on my behalf.


I'm aware of the circularity, as I think many of the other commentators are. I don't see that the circularity is problematic, though. You're right to want to know how to distinguish moral requirements from, say, prudential requirements, given that I want to distinguish different types of reasons depending on what kinds of practical requirements they are capable of generating. And I think that we can all give you some answer, but, of course, the answer will be controversial, for we will probably disagree about what the correct substantive theories of morality and prudence are. So we can distinguish the two types of requirements given certain assumptions about the substantive nature of morality and prudence. For instance, if we think that SOU represents the correct moral theory and that egoism represents the correct theory of prudence, then the fact that some act would benefit others is a moral reason and the fact that some act would benefit the agent is a prudential (non-moral) reason. Now you will reject this particular substantive account of the moral/non-moral reason distinction (which is your (ii)), but that's only because you reject SOU as the correct substantive account of morality. So, perhaps, you should say more specifically what you are looking for. Are you looking for an account of the moral/non-moral reason distinction or substantive accounts of both moral reasons and non-moral reasons? Your suggestions (i) and (ii) seem to be the latter, whereas the way you start off the post (noting, for instance, that some philosophers suggest that moral reasons are overriding) suggests that you're interested in the former. Many of us have given you accounts of the former that won't give you the latter until we have substantive accounts of morality and other non-moral practical realms like prudence. But I don't see why you object to those accounts. You say they're circular, but so what? Why is that a problem, unless what you're looking for is a substantive account of what moral reasons there are? And if that's what you want to know, then tell me what the correct moral theory is and I'll get back to you with the correct substantive accounts of moral reasons and non-moral reasons.

Heath (if I may) -

Thanks for a very interesting post, though I share Doug's worry. It seems to me that the argument that you provide above only gets off the ground if we're uncertain of what the true account of morality and of moral reasons is. I have my suspicions about the answer to this question, but that would take a while here. But I wonder if you might not be sympathetic to the following argument, which seems to me to gesture at something in the ballpark of your conclusion:

(Just to be clear, I use "reason" here to mean something like "that which is normative for some agent", rather than "a true account of an obligation under some system of norms". One can have moral obligations or moral "reasons" in the latter sense without them being normative in the former sense. I take it that the former sense is the important one here.)

1. Either morality always provides reasons or it doesn't.
2. If it does always provide reasons, it is not worthwhile to distinguish moral reasons from other sorts of reasons--we can just talk about the reasons we have, and this will encompass so-called "moral" reasons and so-called "non-moral" reasons. So if morality always provides reasons, we can drop the talk about morality.
3. If morality does not always provide reasons, it is not worthwhile to talk about morality per se. Why discuss a system of norms that does not provide reasons for action? Simply talking about "reasons" without referring to "moral" and "non-moral" reasons will encompass the moral reasons when morality does provide reasons. But talking about morality per se rather than "reasons" is spilling impotent ink.
4. Either way, talk about morality is a red herring. Best to just talk about "reasons". (This might add some content to the claim that the main question in ethics concerns how we should live.)

First: is this congenial?

Second: If it isn't congenial, perhaps these comments go nowhere, but I have worries about the above argument. In fact, it seems to me that (3) is false. One reason might be that even if morality does not provide reasons for action, we still hold it in high regard, i.e., we might praise persons for acting irrationally in a way that leans toward morality. Just judging from the title of the paper, this might be something like what Doug intends (this might be evidence that I should read Doug's paper). In other words, if people act irrationally, though morally, we might claim that they acted in a supererogatory way.

But I think there is a further reason why we might be interested in morality even if it doesn't provide reasons. Let's say we're strong reasons-internalists. In other words, let's say that a purported reason for x to $ is a genuine reason for x to $ if and only if $ appears in x's "subjective motivational set" (perhaps properly tweaked, e.g., with full information). (I contrast strong r-i with the r-i endorsed by, e.g., Korsgaard, which is substantially weaker.) It might be, then, that we don't believe that morality always provides reasons because the coincidence between morality and subjective motivation is imperfect. But on this view, discussing morality is still useful and important. Agents' subjective motivational sets are subject to cultivation, moral education, and other sorts of influencing factors. We might believe, then, that morality is a good guide for educating our children, or for influencing others' motivations in order to give them reasons to act morally. One might read Mill's discussion in Chapter 3 of Utilitarianism in this way. In other words, even if morality does not provide reasons, we can still hope to make it the case that what is rationally authoritative for people better conforms to morality.

Perhaps what this establishes is that even if there is no good reason to distinguish moral from non-moral reasons this does not amount to anything like a "doubt about morality." We have good reason to try to understand morality even if morality is not always rationally authoritative.

Doug writes that

    moral reasons are capable of generating a moral requirement, whereas non-moral reasons are capable of generating some sort of practical requirement/ought (e.g., prudential) but not a moral requirement/ought.

Though this is fine as a criterion of moral reasons – in exactly the same way that "featherless biped" is a fine criterion of being a normal human being – I wonder if this is an account of what it is in virtue of which a reason counts as moral. That is what I thought Heath was asking about. Doesn't it seem plausible that we could give some answer here without having a substantive theory of morality?

In addition, I am not entirely persuaded by the objection Heath raises against the "impartiality" account of morality.

Most of us would probably agree that there are cases in which one just-plain-ought to favor one's wife, child, or friend over a stranger. But can we be certain that the operative reason in this case is a moral one? Maybe this is merely a case of personal reasons trumping moral ones. This seems like a possible move if one believes, as I do, that no single category of reasons is invariably overriding.

Perhaps another view worth mentioning is broadly Hobbesian. Moral reasons are those which arise out of certain contingent relations between people (and other beings). We each have our own ends, or interests. Given the nature of the world we inhabit (scarcity of resources etc.), these are not jointly satisfiable; we can't all get what we want, at least not all of what we want. This brings us into conflict and makes us vulnerable to each other. Moral reasons are those which bear on the resolution of such conflicts.

The obvious test-case for this view is to imagine a world without conflict. Would there be moral reasons in such a world?

(Also, Heath, I believe you have the power to edit Matt Zwolinski's earlier comment. I suggest you use this to turn off the italics.)

As far as I can tell the only example given of a purported non-moral reason for acting is the following:

"And it seems that certain reasons I might be given by the good of others aren’t thereby moral reasons: I could give you a dollar, but why think that’s moral?"

If I give someone a dollar, I'm refraining (or omitting) from giving that same dollar to charity. Refraining from giving to charity is a moral action (or inaction), if anything is. Refraining from giving to charity involves refraining from making someone better off, and even a dollar, for some people, would make them much better off.

So we have, as yet, no reason to think there's at least one reason that's not a moral reason. I'm curious whether there are other purported candidates. I can't think of one. If this is correct, then there is no distinction between a reason that is moral and one that is not.


Not all reasons for moral actions, or for not doing moral actions, are thereby moral reasons. As Mark suggested above, it's better to look at the sources of the reasons than at the actions. And surely there are non-moral reasons – aesthetic reasons, hedonistic reasons, prudential reasons, legal reasons; take your pick. I, in fact, just have non-moral reasons to go to the toilet, take a shower, eat breakfast, brush my teeth, and so on.


What I'm suggesting is that prudential reasons and the like are not a species of non-moral reasons, but rather, they are a species of moral reasons. All reasons to act are moral reasons.

You give purported counterexamples: you have a reason to go to the toilet.

I suggest this is a moral reason or derived from a moral reason. You have a reason to go to the toilet because doing so is what you want; it will make you better off. Those are moral reasons, just as sharing your toilet with me makes me better off, and that is a moral reason.

The same kind of treatment can be applied to all of your examples.


The reason I think that defining “moral reason” in terms of (say) “moral theory” is viciously circular is that now my problem becomes, what is the distinction between moral and non-moral (not reasons but) theories? For example, I think I know what makes egoism a prudential theory: it’s a theory about my self-interest. Such theories have to specify what my self-interest consists in, future discount rates, etc. But I don’t need to know which is the correct theory of prudence to know whether I’m looking at a theory of prudence.

On the other hand, stipulate that SOU is a correct theory of some kind. But what makes it a moral theory? I would be surprised if the answer were, “Because it’s a theory of other-regarding obligations.” Because then it looks like Kant never came up with a moral theory at all, since his theory included self-regarding obligations. And even if there’s something wrong with that example, it seems to me I should not have to know what the correct moral theory is before I can tell whether a given theory is a moral theory.

Another way to put this whole line of thought: suppose I was quite sure about what all my reasons were, derived from a comprehensive theory integrating several sub-theories. Mightn’t I still be puzzled about which of my reasons were moral reasons, and which sub-theories were moral theories? How should someone explain this to me?


Your argument is indeed congenial, though I didn’t think of it in exactly that form. The conclusion is precisely what I meant by the main question of ethics being how to live.

Your two suggestions for uses for morality do give me something to chew on. They have in common the idea that we might wish people’s reasons were different than they are, and whatever that ideal set of reasons is, we would like a name for it. I can see the need for this kind of thing and I hadn’t considered it before, so thanks.


Your Hobbesian suggestion had occurred to me too, and there is definitely a category of reasons concerned with mitigating interpersonal conflicts. Are those moral reasons? I’m not sure. A world without scarcity would still be one in which murder, adultery, and rape were morally wrong, I think. A world without conflict—that’s harder. Reasons for self-cultivation seem to be the tough case.


I’m with Jussi. To begin with, doing what you want, and making yourself better off, are not the same thing. Which is the moral thing to do? Are there still reasons to do the other, when they conflict?


"To begin with, doing what you want, and making yourself better off, are not the same thing."

Doesn't getting what you want make you better off, all else equal? That is, having a desire satisfied makes one better off. I agree, though, that being better off and getting what one wants are not the same thing. I'm suggesting that acting so as to make oneself better off and acting so as to get what one wants, though not the same, are both moral reasons to act.

"Are there still reasons to do the other, when they conflict?"

I'm not sure I follow. Moral reasons can conflict, and when they do, we have moral theories that advise us on how to resolve those conflicts. For example, consequentialism tells us to maximize value, that is, to act on those reasons that favor maximizing value.

I'm missing the problem with supposing that all reasons are moral reasons.


I see now. So you don't have any specific doubts about the distinction between moral and non-moral reasons. Your worry is about distinguishing the moral from the non-moral generally, whether the distinction be applied to reasons, theories, values, or whatever else. Is that right?


Yes, the main question has to do with the distinction between moral and non-moral. I picked 'reasons' in the original post because that seems to be the go-to normative concept these days, but the problem is general.


The problem with just supposing it is that we would like to hear some support for the claim, some kind of an argument. Many of us find the claim very unintuitive.

I at least think of reasons as something like considerations that count in favour of actions. Some of these considerations seem utterly non-moral in nature, like the fact that nature calls or the fact that I feel like whistling. Others seem paradigmatically moral considerations, like the fact that a stranger is in dire need or that I have promised to do something.

So, there seems to be an intuitive difference that seems worth saving. Now, you want to make the landscape flat. I guess I would like to see an argument that would give some (moral?) reason to believe that all reasons are moral.


One can give an argument for a claim or respond to arguments against a claim. Above, I responded to arguments against the claim that some reasons for action are nonmoral reasons. I did this by responding to purported counterexamples. That was all I aimed to do.

Perhaps you're suggesting it's simply intuitive that there are some nonmoral reasons for action, but I do not share this intuition, so I don't feel any burden to offer an argument for the claim that all reasons are moral reasons. Nonetheless...

If a reason for acting is something that counts in favor of acting, then I take it that the "counting in favor" expresses a moral concept.

I also don't understand the idea that some actions we perform are somehow shielded from moral evaluation and that our reasons for performing these actions are not evaluable as the kinds of reasons one ought to be acting from. Our lives, our characters, and our reasons are all within the moral sphere, which is to say, nothing is outside of the moral sphere.

Is that an argument? Not really. How about this: all reasons for action involve desires or beliefs directed at value. As such, all reasons are directed at value. Anything directed at value is a moral reason. So, all reasons are moral reasons.

Excellent post. It's quite embarrassing that we self-identified moral philosophers don't have a consensus answer to this question.

I am tempted to think that the claim that the word "moral" in locutions like "moral reason" has only a mesmeric force is spot on.
