Normative theories are occasionally criticized for being esoteric. A theory T is esoteric iff T is true (or correct, or superior to its rivals, etc.), but it is better that T not be generally believed or accepted. Examples of allegedly esoteric philosophical theories include:
• "Sophisticated" utilitarianisms: These theories distinguish between utilitarianism as a decision procedure and utilitarianism as a criterion of rightness, and might argue that utility is better maximized if some people neither use the utilitarian standard as a decision procedure nor even accept or endorse that standard. If the theory were not esoteric, it might well be self-defeating.
• Political anarchism: Even though it is false that we have any obligation to obey the state as such (apart from the justness of its laws and directives), it is better that people in general believe they have such an obligation.
• Ethical egoism: Even though what we ought to do is always to pursue our self-interest, egoists would prefer that we not know or believe this.
One initial observation: "It is better that T not be generally believed or accepted" can be cashed out different ways. Sometimes esoteric theories are defended on terms internal to the theory, i.e., that the theory is better implemented or realized if it remains esoteric (such as in sophisticated utilitarianism and ethical egoism). But other times, the esotericism is defended by appeal to some other normative concern; in the case of political anarchism, it is better that people not accept or believe the theory not because the theory would be better realized if we did not believe it, but because of an independent normative worry, such as fear of social disorder.
I'm curious whether esotericism is a fair criticism of a normative theory — whether there is a single criticism here, and what force such criticisms may have. Initially, I can think of three reasons to suppose that esotericism is fair criticism:
• Ethics of belief: Belief should strive to be true. An esoteric theory denies this, and so should be rejected.
• Stability: Esoteric theories do not result in stability. The best way to ensure that a theory is realized in practice is for people to believe and accept it. We get no benefits from the true theory if few people believe it. (I gather this is the force of a lot of contractualist and Rawlsian thinking about publicity, which I take to be the denial that esoteric theories are acceptable.)
• Immorality: It is immoral — or at least morally wrongheaded — to endorse esotericism in a theory. Genuinely moral agents do not simply act in accordance with, or in ways that promote, the normative ideals proposed by a given theory. They must instead act in accordance with the theory because they realize the theory is true and gives them reasons to act. Virtue theorists, Kantians, and others with agent-centered conceptions of morality might find this line of thought attractive.
In any case, I'd be curious to know if anyone has thought carefully about these issues.
Mike,
Isn't the problem with esotericism rather that it conflicts with the usefulness of a moral theory? Under the assumption that, ideally, practical principles are usable, we should prefer practical principles that are, inter alia, (1) determinate, (2) exoteric and (3) not overly demanding. Principles that do not satisfy these criteria are likely not to be especially useful.
Posted by: Mike Almeida | October 13, 2006 at 06:09 PM
I've never studied this in any depth, but I have for a long time failed to understand this kind of criticism.
Is Mike's suggestion of "usefulness" really what Michael is getting at when he talks of "stability"?
I think Mike may be close to what people intend when they criticise theories in this way when he talks of usefulness, but it looks to me to be a completely wrongheaded criticism. Sophisticated utilitarians, for instance, will certainly maintain that their theory is very useful, given that it can tell us what motivational set we should adopt. How are these theories meant to be non-useful if they tell us which motivational sets to adopt?
Some brief thoughts on the other two worries that Michael raises:
In terms of "Ethics of Belief", this must be a claim with a ceteris paribus clause. Sure, beliefs should aim at truth, but that's not to say that we shouldn't sometimes believe falsehoods when there are other stronger reasons to do so. Can such a weak ceteris paribus claim be a deciding factor against, say, sophisticated utilitarianism, given the huge number of other arguments that might be employed both for and against it?
I'm not sure I understand your last idea. "Immorality" seems like a strange way to phrase it, for the obvious reason that we're here assessing normative theories. To assess them by bringing in some additional dimension of normativity seems like it might generate a regress. Or is the idea that correct normative theories ought to prescribe their own non-esotericity? (is that a word?!) That sounds question-begging.
Thanks for the interesting post.
Posted by: Alex Gregory | October 14, 2006 at 04:17 AM
Sophisticated utilitarians, for instance, will certainly maintain that their theory is very useful, given that it can tell us what motivational set we should adopt. How are these theories meant to be non-useful if they tell us which motivational sets to adopt?
If a moral theory is esoteric then it is better, typically on the theory's own terms, that the theory is not taught or made widely known. If utilitarianism is esoteric, for instance, then the maximization of utility is better achieved if the principle of utility is not widely known. The criticism advanced is that a moral theory that remains unknown (or known to just a few) won't be especially useful. And it is not a good feature of practical principles that they are not useful. I certainly don't claim to have originated the criticism. For what the criticism is worth, it is well-known.
Posted by: Mike Almeida | October 14, 2006 at 09:37 AM
Mike C. writes:
I take it that the 'should' here is the moral should. I also take it that it is people, not beliefs, that should strive for truth. And, of course, a sophisticated utilitarian doesn't deny that you should often pursue truth, only that you should always pursue truth. But now whether people should always act in ways that make it more likely that their beliefs will be true is clearly a question for normative ethical theory. And the claim "people should always act in ways that make it more likely that their beliefs" is true if and only if utilitarianism is false. Given this, I don't see how you can use this claim as evidence for utilitarianism's being false. Your ethics of belief just seems to beg the question against the sophisticated utilitarian.
Posted by: Doug Portmore | October 14, 2006 at 09:47 AM
And the claim "people should always act in ways that make it more likely that their beliefs" is true if and only if utilitarianism is false. Given this, I don't see how you can use this claim as evidence for utilitarianism's being false.
I can't follow that, Doug. The objection from the ethics of belief is based on epistemic evidentialism. The idea is that you should believe only those propositions for which your evidence is on balance positive. You need not believe those propositions that are such that belief in them on balance pays. Utilitarianism requires the acquisition of those beliefs that are utility-maximizing. It does not require (and sometimes prohibits) the acquisition of beliefs evidence for which is on balance positive. These criteria pull apart in many cases. Pascal's Wager is one. But there are lots of others. It might be utility-maximizing for me to believe that I will survive surgery, even if the evidence is strongly against it. Evidentialists argue that I should not (i.e., should not, as a good epistemic agent) believe I will survive.
Posted by: Mike Almeida | October 14, 2006 at 11:13 AM
I think there is an assumption of ‘complete publicity’ hidden in this discussion. We should first ask whether complete publicity or complete transparency is a necessary condition of a true/acceptable/correct moral theory.
Complete publicity: moral theory M possesses complete publicity only if every agent has the capacity or ability to use M as a decision procedure.
Complete publicity is an assumption that seems unproven. There are at least some reasons for not believing the assumption. Using any M as a decision procedure requires considerable factual knowledge, sensitivity to many values and interests, and imaginative awareness of effects of decisions on other people. Whatever non-moral capacities, abilities, and skills are required for an agent to use M as a decision-procedure vary widely throughout the human population. If S lacks certain sorts of experiences, biological dispositions, or educative opportunities that would develop S’s non-moral capacities, S might not possess the non-moral characteristics required to use M as a decision-procedure. S need not be ‘abnormal,’ ‘depraved,’ or ‘psychopathic’ for this to be true of S.
It seems that for any M and any population, some proportion of the population would lack the non-moral characteristics required to use M as a decision-procedure.
I think complete publicity is an ideal condition, so I am inclined to think that being esoteric itself is not good reason to reject a moral theory until we have clarified the publicity requirement on moral theories.
Posted by: robert | October 14, 2006 at 11:25 AM
Mike A.
I meant to write: "the claim 'people morally ought always act in ways that make it more likely that their beliefs are true' is true if and only if utilitarianism is false."
If Mike C. is claiming only that people ought, epistemically speaking, believe what is true, then I don't see that esoteric moral theories deny this. After all, moral theories make no claims about epistemic oughts or reasons.
So I guess it's a dilemma: On the one hand, if "Belief should strive to be true" is interpreted as merely a claim about what we should do epistemically speaking, then Mike C. is wrong to claim that esoteric moral theories deny this. On the other hand, if "Belief should strive to be true" is interpreted as a claim about how people morally ought to act, then it does conflict with, say, utilitarianism, but it just begs the question to say that this conflict counts as a reason to reject utilitarianism.
Posted by: Doug Portmore | October 14, 2006 at 11:54 AM
Doug,
I don't think the objection is question begging. There is independent reason to hold that we should believe only those propositions for which the evidence is on balance positive. The reason is that such beliefs are more likely true and, certainly prima facie, truth is the aim of belief. Utilitarianism entails that truth is not the aim of belief (nor a norm of assertion), rather utility is. Since moral goals trump epistemic goals, the conflict resolves. We are left with a utility-maximizing epistemology. That seems, again prima facie, mistaken.
Incidentally, D.H. Hodgson made this argument long ago in Consequences of Utilitarianism. D. K. Lewis has an interesting reply to Hodgson in 'Utilitarianism and Truthfulness' in his Phil. Papers II.
Posted by: Mike Almeida | October 14, 2006 at 12:45 PM
I thought Doug's point was that "beliefs aim at truth" means something like there's an epistemic obligation to believe truths. Not a moral obligation. Utilitarianism entails that it's morally wrong to believe truths sometimes, but says nothing about whether it's epistemically obligatory. Maybe some utilitarians think that moral obligations trump epistemic ones: if you're e-obligated to do a1 and m-obligated to do a2, and a1 and a2 are alternatives, you should all-things-considered do a2. If they think that, it's not because they are utilitarians. It's because they think moral obligations trump epistemic ones. You could think that if you were a Kantian too. And you could be a utilitarian and deny it. I don't see any problem for utilitarianism here at all.
Posted by: Ben Bradley | October 14, 2006 at 01:21 PM
I meant to write: "the claim 'people morally ought always act in ways that make it more likely that their beliefs are true' is true if and only if utilitarianism is false."
I think you really meant to write 'only if', not 'if and only if'. (Just a small point.)
Posted by: Campbell | October 14, 2006 at 01:23 PM
Mike A.,
You write:
As I understand the view, act-utilitarianism holds:
AU: S's act of X-ing is morally permissible if and only if there is no alternative available to S that would produce more utility than X would.
Can you explain to me how AU entails that truth is not the aim of belief? As I see things, AU says something about acts and what determines their moral statuses, but it says nothing about the norms of assertion or about belief and what its aim is supposed to be.
AU implies that acting so as to cause oneself to believe that P (whether P be true or false) is morally permissible if and only if so acting maximizes utility. But if Mike C. is giving us an account of the ethics of belief such that acting so as to cause oneself to believe that P is morally permissible if and only if there is sufficient evidence that P is true, then it seems to me that he is just begging the question against AU.
Posted by: Doug Portmore | October 14, 2006 at 01:53 PM
Campbell,
Right. Thanks for the correction.
Posted by: Doug Portmore | October 14, 2006 at 01:53 PM
I don't see any problem for utilitarianism here at all.
I guess I'm less certain there's no problem here or in the vicinity. Suppose you are an ideal utilitarian, and say to me "I pushed button B". I have no reason to believe that you are aiming at expressing the truth and every reason to believe that you are aiming at expressing something utility-maximizing. In fact, I couldn't determine whether you are even instrumentally aiming at expressing a truth unless I know what you believe about what I would do upon the expression of the truth. But I don't know what you believe about what I would do and, for obvious reasons, it wouldn't help to ask you. The problem would begin all over again. Do Kantians have such a worry? Do social contract theorists?
Posted by: Mike Almeida | October 14, 2006 at 02:14 PM
Thanks everyone. As usual, it looks like I set off a firecracker and scurried away. My own post was motivated by genuine curiosity: I'm troubled by charges of esotericism, but am not confident about their force or nature. So I'm trying to figure out if I have a dog in this race or not.
Doug, Mike A., Ben, et al.: The ethics of belief point suggests (to me at least) a conflict between epistemic and moral oughts. (Ben is correct, I think, about Kantians also allowing that moral considerations supersede epistemic ones. Kant's moral arguments for the afterlife and for belief in God are examples of this, though in Kant's metaphysics, there is not evidence *against* these conclusions that would entail that accepting them violates a weak form of evidentialism.) That is to say, Mike A. is right and Doug is right: Utilitarianism (sophisticated or naive) by itself entails nothing about the ethics of belief, but it may violate evidentialism. The issue here is whether our normative commitments form (or ought to form) a unified set, comprising both the ethical and the epistemic.
Robert - I appreciate the spirit and direction of your reply. Though I wonder if clarifying "the publicity requirement on moral theories" is a project we can undertake independent of any substantive moral commitments. I.e., I suspect that arguments on what *the* publicity requirement on moral theories is will turn on what moral theory we find attractive. Publicity doesn't seem like an a priori or theory-neutral theoretical constraint in the way that, say, the ability to make testable predictions is an a priori constraint on scientific theory.
Alex - on the "immorality" criticism. Well, as I mentioned above in my reply to Robert, this might suggest that talk of publicity vs. esotericism is not something that can avoid circularity or begging the question. At root, my suspicion is that we have a dispute about the place you think morality has in human existence: Is morality mainly a standard for appraising action, etc., or is it something that is supposed to guide our behavior and toward which we are supposed to be positively disposed?
Posted by: Michael Cholbi | October 14, 2006 at 02:26 PM
"As I see things, AU says something about acts and what determines their moral statuses, but it says nothing about the norms of assertion or about belief and what it's aim is supposed to be."
I take an assertion to be an act. I take the utilitarian goal for assertions to be utility-maximization. At best the goal of speaking truthfully (a typical norm of assertion) is instrumental. I'd say the same thing about belief. I take the cultivation of belief to be (or to include) voluntary action. These actions aim at utility maximization. So I should cultivate beliefs that are such that having them would maximize utility. That's all I have in mind.
Posted by: Mike Almeida | October 14, 2006 at 02:27 PM
Michael:
your response reminded me of something Frankena writes at the end of "Obligation and motivation in recent moral philosophy" about the difficulty of determining whether externalism or internalism about ethics is more correct. "Such a determination," he says
calls for a very broad inquiry . . . about the nature and function of morality, of moral discourse, and of moral theory, and this requires not only small-scale analytical inquiries but also studies in the history of ethics and morality, in the relation of morality to society and of society to the individual, as well as in epistemology and in the psychology of human motivation."
A large task, indeed. The objection based on esotericism, like the publicity requirement, is the result of many conclusions in many areas of thought. It is often linked to unexamined beliefs in other areas. So I think it is not a decisive objection by any means to say that a moral theory is esoteric.
Posted by: robert | October 14, 2006 at 03:09 PM
Mike A.,
Thanks. That's helpful. I'm not onboard with thinking there's any real problem for AU here, but I appreciate your point better.
Here's why I don't think there's a problem. Yes, asserting that P is an act. And you may have some intention/motive/goal in performing this act. But note that AU, by itself, doesn't entail any view about what your intentions, motives, or goals ought to be. Some utilitarians claim that the best intentions/motives/goals to have are the ones that dispose one to maximize utility. But one can accept AU without accepting this.
Posted by: Doug Portmore | October 14, 2006 at 03:11 PM
Doug, I agree that,
. . . AU, by itself, doesn't entail any view about what your intentions, motives, or goals ought to be.
What a utilitarian should do is maximize utility. How could that not be true? Maybe the best way to do that is not to aim directly at utility maximization. Maybe the best way to do that is to aim at conveying the truth. But certainly that is not in general true. Since I know that, I should not (except in cases where I know a lot more than I typically do about the utilitarian's beliefs) trust the information that a utilitarian conveys.
Posted by: Mike Almeida | October 14, 2006 at 04:57 PM
My problem with esoteric normative theories is mainly with their lack of success in what I take to be one of the main aims of normative theories. I'm often puzzled about what I ought to do, what would be wrong for me to do, and so on. Sometimes, at weak moments, I turn to normative ethical theories for guidance. Esoteric theories are, by definition, the ones I cannot turn to. By the lights of these theories themselves, it would be wrong for me to do so, or at least they guide me to think about something else. For this reason, the fate of theories of this kind is, in Williams's memorable words, to usher themselves from the scene. If the esoteric background theory points to some other considerations that I should use to guide my actions anyway, then when I am doing *normative ethics* it is those considerations I'm more interested in.
One word also about the ethics of belief objection. This is a tempting claim to make, but I do think there is a reply from the esoteric perspective. The objection assumes that there is a difference between truth and warrant for having the relevant beliefs. The esotericists can deny this gap if they accept a deflationary or some other epistemic notion of truth. They can thus think that, yes, it is true that aiming at truth is constitutive of belief. But they can say that for this class of beliefs to be true just is for them to be warranted. For this reason, whatever reasons the esoteric theory gives for the beliefs, these reasons are reasons for their truth. So, in terms of the ethics of belief, things are kosher.
Posted by: Jussi Suikkanen | October 14, 2006 at 07:37 PM
Jussi,
You must be using a different definition of an esoteric theory than Mike C.'s. On Mike's definition, esoteric theories offer the true (or the correct, or at least the more plausible) account of what you morally ought to do. So they're very successful at telling you what you morally ought to do and what it would be right and wrong for you to do. They are also good at telling you that you would be more successful at doing what you ought to do if you didn't focus on what you ought to do but on something else instead.
Posted by: Doug Portmore | October 14, 2006 at 08:20 PM
Doug,
But Jussi's onto something, isn't he? An esoteric theory is one people aren't supposed to believe, ergo, are not supposed to use the theory to *guide* their actions. They provide, as you put it, "an account of what you morally ought to do," but explicitly deny that the theory should be consulted in determining what you ought to do. In this respect, an esoteric theory provides us abstract or backward-looking criteria, but not prospective guidance for how to act.
Posted by: Michael Cholbi | October 14, 2006 at 08:31 PM
Thanks Michael! That's pretty much the point. Whatever answer these theories give to the question of what I ought to do, that answer is something I should not think about when I deliberate about what I ought to do. They thus tell me nothing after all when I think about this question from my first-person deliberative perspective as an ordinary moral agent. Whatever they give as an answer to that question from the abstracted theoretical perspective is of little interest to me in practical life. I guess I am with Jackson that normative ethics is concerned with the practical perspective and not the theoretical, abstract one.
Posted by: Jussi Suikkanen | October 14, 2006 at 08:49 PM
I wonder if there's two possible criticisms here, and it might be important which it is that we're worried about:
1) A theory X is esoteric iff it is better if no-one is aware of X at all.
2) A theory X is esoteric iff it is better if people do not motivate themselves according to X.
To some extent the difference is a matter of degree, but Jussi's worry seems to apply only to (1), and not to (2). Theories which fall into category (2) can tell us (as I stated earlier) which motivational set to adopt, and therefore do function, indirectly, as a guide to practical action. Theories in category (1), on the other hand, offer no practical guide to action given that we ought to ignore them entirely.
Here's another worry with the criticism: Isn't whether a theory ends up being esoteric or not a function of how intelligent those people using it are? And if so, doesn't that make the esotericism criticism reliant on a kind of relativism? Why should the truth of a theory be dependent on how able we are in using it?
And on the ethics of belief, isn't the critic facing the following dilemma:
Either, the criticism is that we morally ought to try to adopt true beliefs, or the criticism is that we epistemically ought to try to adopt true beliefs.
If it's the former, then it is, as Doug states, question begging. If it's the latter, then I don't see the problem - Why can't we have moral reasons which conflict with epistemic reasons? As others have pointed out, this is by no means unique to esoteric theories.
Posted by: Alex Gregory | October 15, 2006 at 03:25 AM
Jussi,
They thus tell me nothing after all when I think about this question from my first person deliberative perspective as an ordinary moral agent.
When you look at the theory from the first-person deliberative perspective, it tells you exactly what you are supposed to do. Right? It's just that if the theory is esoteric, you shouldn't be looking at the theory. But whenever you do look at it, it tells you exactly what you ought to do. After all, that's what it's a theory of; it's a theory about what we ought to do. So it does exactly what we ask of it. It doesn't provide the best decision procedure for deciding what to do, but that's because it's not a decision procedure.
Mike C.
An esoteric theory is one people aren't supposed to believe, ergo, are not supposed to use the theory to *guide* their actions. They provide, as you put it, "an account of what you morally ought to do," but explicitly deny that the theory should be consulted in determining what you ought to do.
Yes. But why is this a fault with the theory? What's odd about the way you set up the problem is that esoteric theories are, by your definition, true. Yet you go on to provide criticisms of them. But if you admit that the theory is true, aren't you just complaining about how you don't like the theory's implications? No matter how much you don't like it, you have to accept that it is true, given your own stipulations.
Jussi and Mike C.
Let me try to put the point differently. Suppose that human beings are such that we always do the opposite of what we believe we ought to do. In this case, whatever the true moral theory is, it will be esoteric. But how is this a criticism of the theory? We can lament that we're like this. And we can lament that, because we're like this, we ought to try to get ourselves to believe the opposite of what's true. But the problem lies with us, not with the theory, which is, by stipulation, true.
Posted by: Doug Portmore | October 15, 2006 at 09:01 AM
Doug,
you write:
'When you look at the theory from the first person deliberative perspective, it tells you exactly what you are supposed to do. Right? It's just if the theory is esoteric, you shouldn't be looking at the theory. But whenever you do look at, it tells you exactly what you ought to do'
So, I want to know what I ought to do. I ask an esoteric normative theory for help. It tells me 'don't ask me' or, as you say, 'you shouldn't be looking at the theory'. How does that help me in my practical problem? If I, on the other hand, ignore the first advice of the theory and investigate what other guidance the theory gives me, then by the theory's own lights I am acting wrongly. However, acting wrongly was the last thing I wanted to do. Therefore, the theory cannot guide me to do what I ought to do.
Alex,
I think you are right. The theories of class (2) are the kind of two-level theories that you can find in, for instance, Hare. I don't think that the Williams objection can be put against them, but I do think they face other serious problems. Scanlon on Hare is very good on this. There seems to be something morally dubious in the idea that our normal moral thinking does not aim at what is really wrong, good, and so on, but rather there is a theoretical level on which we think about which acts really are morally wrong, and our ordinary thinking is just an instrument that gets us to act in the right way.
Posted by: Jussi Suikkanen | October 15, 2006 at 12:47 PM
Jussi,
So, I want to know what I ought to do. I ask an esoteric normative theory for help. It tells to me 'don't ask me' or as you say 'you shouldn't be looking at the theory'. How does that help me in my practical problem?
Let's just use act-utilitarianism (AU) for the purposes of illustration. It tells you what you want to know: maximize aggregate utility. Not only that, it tells you that you ought to do whatever will cause you to adopt the decision procedure that will maximize your chances of maximizing aggregate utility. So it tells you what you ought to do and it tells you what decision procedure to adopt. What else could you possibly ask for?
If you didn't know that AU was true, you wouldn't even know what decision procedure would maximize your chances of doing what you ought to do. If it weren't for AU, you might wrongly do what would inculcate in you and in others a bad decision procedure (like do whatever the Catholic Church says, or do whatever you think will be self-interestedly best for you, or do whatever you think will maximize aggregate utility). It's a good thing that you have AU to tell you that you ought not do what will inculcate these sorts of decision procedures in you and in others.
If AU is true and esoteric, then this is the best you can hope for. You may not like the idea that you should do what will make it the case that you are guided by something other than the truth in your day-to-day decision making. But I don't see how our not liking a theory's implications is a criticism of a theory that we are, for the sake of argument, stipulating is true.
Posted by: Doug Portmore | October 15, 2006 at 01:08 PM
Hi All,
Just a quick point - at a bit of a tangent, but I thought it might be worth contributing. As regards the force of the esotericity (?) objection, it seems to me that insofar as it has any force, it's captured quite nicely by Korsgaard (in her 'the Sources of Normativity') where she cites three constraints which must be met by any answer to the question 'what justifies the claims that morality makes on us?'. One of these is the transparency constraint: that is, it can't be that the true nature of moral motives must be concealed from the agent if those motives are to be efficacious - that is, the justification and the explanation must both go through once the agent understands himself completely. [I'm quoting loosely from page 16 here]. The reason why this constraint applies, if I understand Korsgaard correctly, is that any correct moral theory should be able to address the question as posed from the first-person position of an agent who demands a justification of the claims which morality makes on him. If the theory is self-effacing, then it's not clear how it can do this (because the explanation undermines the justification).
Does that seem plausible?
Posted by: Ezra | October 15, 2006 at 01:21 PM
Jussi,
Please answer the following questions:
Suppose that human beings are such that we always do the opposite of what we believe we ought to do. Do you agree that this entails that all true normative theories are esoteric? Does this mean that these true normative theories are less plausible (or less likely to be true) than the false ones that it would be better for us to believe and follow? If you answer "yes" and "no" respectively, then can you explain in what sense your putative criticisms are genuine criticisms if they're not considerations that make the truth of the theory less plausible?
Posted by: Doug Portmore | October 15, 2006 at 01:21 PM
Doug,
Again, I'm not confident what my all-things-considered views on these matters are, but perhaps the (or a) problem with esoteric theories is that, at least with normative theories, we ask for more than that they be true. Such theories are also subject to conditions of ... well, I'm not sure what to call them, maybe something like acceptability, publicity, action-guidingness, etc. This might be a consequence of thinking of them as *normative*, i.e., as providing us guiding norms. If so, a theory may be criticizable for being esoteric without that criticism being one that casts doubt on its truth per se. I.e., worries about esotericism are not just attempts to deploy modus tollens against a theory to demonstrate its falsity.
And I'm not sure how to understand the relationship between truth and these kinds of conditions: whether action-guidingness is a condition that determines if a theory is true or if it's better thought of as an independent constraint on the endorseability of true theories. As you say, to insist that a theory not be esoteric may be question-begging against esoteric theories, but I wonder if it's not equally question-begging to reject action-guidingness, publicity, etc., as a criterion for a theory's acceptability. I feel genuinely at a loss here.
Posted by: Michael Cholbi | October 15, 2006 at 01:38 PM
What's wrong with this argument?
P1: By Mike's definition, esoteric theories are true.
P2: Whatever a true theory implies in conjunction with other truths is true.
P3: The fact that a theory implies something that is true can never be a valid criticism of that theory.
C: Therefore, there are no valid criticisms of esoteric theories of the form: a given esoteric theory in conjunction with other truths implies X.
Aren't all the criticisms that have been offered so far of the form: a given esoteric theory in conjunction with other truths implies X?
Posted by: Doug Portmore | October 15, 2006 at 01:40 PM
Mike,
Thanks. That's helpful. But when you say, "Such theories are also subject to conditions of ... well, I'm not sure what to call them, maybe something like acceptability, publicity, action-guidingness, etc.," what are these conditions for? Conditions for being an adequate theory? Is it your view, then, that a normative theory can be true but inadequate? I suppose that a theory can be true but be inadequate to some task (say, that of itself providing a decision procedure). But so what? Isn’t it still the best theory – indeed, the true theory? Isn’t a false theory always inadequate? So which is more inadequate: a false theory that isn’t self-effacing or a true theory that is self-effacing?
Posted by: Doug Portmore | October 15, 2006 at 01:53 PM
Maybe I'm just not taking a moral theory to be what others are taking it to be. As I take it, a moral theory is an account of what makes acts right and wrong. As such, all we want from it is a true account of what the fundamental right-making and wrong-making features of acts are. Now, of course, I don't deny that we also want (and even need) to know what we should believe is right and wrong, how we should deliberate about what's right and wrong, how to morally justify ourselves to others, etc. But why suppose that there is any one theory that will do all these various things?
Posted by: Doug Portmore | October 15, 2006 at 02:08 PM
Doug,
thanks - interesting points. Here are a few things I have to say in response. Let's start from this one:
'Let's just use act-utilitarianism (AU) for the purposes of illustration. It tells you what you want to know: maximize aggregate utility. Not only that, it tells you that you ought to do whatever will cause you to adopt the decision procedure that will maximize your chances of maximizing aggregate utility. So it tells you what you ought to do and it tells you what decision procedure to adopt. What else could you possibly ask for?'
So, I get the answer to my practical question, and it is: 'maximize aggregate utility'. First, as a reply to any practical question this is a really bad and uninformative one. As it doesn't say what utility is, it fails to give any guidance. Second, if we set that aside, there is an obvious follow-up question: 'How does one do that?'. The esoteric view says 'not by reflecting on what maximizes aggregate utility'. So the question then is 'How should I think about what I ought to do?'. Your assumption is that AU gives an answer to this question. But I'm not sure it does. By AU's lights, if I could get more utility by doing something else than by starting to run the utilitarian calculus on how I should deliberate about what I ought to do, then I should be doing that something else. It would then be wrong for me to think about the question of the deliberation procedure in the utilitarian way. So now AU tells me neither what to do nor how to go about deciding this. I think I could ask for more from a normative theory.
Next, you ask:
'Suppose that human beings are such that we always do the opposite of what we believe we ought to do. Do you agree that this entails that all true normative theories are esoteric? Does this mean that these true normative theories are less plausible (or less likely to be true) than the false ones that it would be better for us to believe and follow? If you answer "yes" and "no" respectively, then can you explain in what sense your putative criticisms are genuine criticisms if they're not considerations that make the truth of the theory less plausible.'
Sorry, I just cannot imagine such a possibility, for Davidsonian reasons. We could not make sense of the agents in such a world. In order to make sense of them, we would rather deny that this is what they think they ought to do (they might themselves be mistaken about what they ought to do) and attribute to them some other thoughts about what they ought to do. Also, even if I could imagine the scenario, I think that in that case moral theorising would lose its point. Why start to think about what we ought to do if we knew that the consequence would be that we could not act in just that way?
I think your last question gets to the heart of the matter. There are two conceptions of what ethical theory is in the business of doing. The first sees the project as a quasi-science that would, we hope, one day reveal the true nature of the world. The interest here is theoretical. Ordinary agents can ignore the results in their practical decisions, and are even encouraged to do so. The second sees the project as a practical one. We want to form plans for how to act, and for this reason we first ask how we ought to act. Normative theory is then just a more systematic way of going about solving this problem. On the latter view, esoteric theories make little sense. The question, though, is whether the first project makes sense. I'm not sure.
On a final note, I don't think the notion of truth can be used to distinguish the projects. The questions 'Should I do X?' and 'Is it true that I ought to do X?' seem to be pretty much the same, as are the answers 'I should do X' and 'It is true that I should do X'. Therefore, I don't think it can be the case that questions about truth are limited to the theoretical perspective while the ones that are not can be reflected on at the practical level. If the theory fails to answer the ought question, it fails on the truth question too, and vice versa.
Posted by: Jussi Suikkanen | October 15, 2006 at 05:17 PM
Jussi,
I take the following excellent point of yours:
Perhaps this is the source of our disagreement. My interest in ethical theory is a theoretical one, which is not to say that I'm not interested in the other project you mentioned, just that it's a different project to my mind. Perhaps, though, my realist views are what attract me to the theoretical conception. If you're not a realist, then maybe the only project for you is the practical one.
In response to my query about a world in which human beings always do the opposite of what they believe they ought to do, you say, "Sorry, I just cannot imagine such a possibility for Davidsonian reasons. We could not make sense of the agents in such a world."
Can you explain this further? And please note that I didn't say that the reason they always do the opposite of what they believe they ought to do is that they always intend to do the opposite of what they believe they ought to do. I was imagining a case like that of the direct act-utilitarian who tries to follow the utilitarian calculus as a decision procedure but fails miserably. Imagine, then, that human beings are inept, not that they are always trying to do the opposite of what they judge they ought to do. They try to do what they believe they ought to do, but they always fail spectacularly.
Posted by: Doug Portmore | October 15, 2006 at 05:34 PM
Doug,
You say above,
. . AU, by itself, doesn't entail any view about what your intentions, motives, or goals ought to be.
But let me see if I have this right. You seem to be evading problems for AU by denying that AU offers any recommendation concerning which actions to perform, which actions we have most moral reason to perform, which intentions to have, or which goals to pursue. So you seem to be endorsing AU and claiming that it is consistent with RU, MU, and GU. And further, you seem to be claiming that a moral agent who endorses AU and RU and lives in accordance with MU and GU might nonetheless lead a perfectly moral life.
MU. One ought always to act in a way that minimizes utility or one ought never perform what AU claims is the right action.
GU. And one ought to have as an ultimate moral goal the minimization of overall utility.
RU. An act is morally right iff. it maximizes utility, but the moral rightness of an action gives no moral reason to perform the action.
AU is consistent with MU and RU, on your view, since AU entails nothing about what actions we ought to perform or have most moral reason to perform. AU is consistent with GU since AU entails nothing about what moral goals we should have. This seems less than plausible to me, but it seems to be what you are defending. Is that right?
Posted by: Mike Almeida | October 15, 2006 at 05:53 PM
Mike A.,
You seem to be...denying that AU offers any recommendation concerning which actions to perform or which actions we have most moral reason to perform.
How do you get this from what I wrote?
On my understanding of what a moral reason is, RU is incoherent. But why don't you tell me what you mean by a 'moral reason'.
And AU certainly tells you what you morally ought to do and morally ought not to do, for 'impermissible' just means 'what one morally ought not to perform'. That would seem to be a moral recommendation concerning which actions to perform.
My claim was that AU says nothing about what our beliefs, intentions, goals, motives, or any other of our mental states should be. It says nothing about what the norms of assertion are. It says nothing about epistemic matters. And it says nothing about what the aim of belief is.
Posted by: Doug Portmore | October 16, 2006 at 08:36 AM
And AU certainly tells you what you morally ought to do and morally ought not to do, for 'impermissible' just means 'what one morally ought not to perform'
Hold on. Haven't you been insisting that AU just tells us what makes an action right? As such it is a metaphysical principle telling us what properties confer rightness on actions. Do you want to say as well that it tells us what to do? Maybe you've been saying that, too. If so, I missed it.
Posted by: Mike Almeida | October 16, 2006 at 09:33 AM
Mike A.,
Isn't the property of being morally right just the property of being that which morally ought to be performed?
Perhaps, if you think that "Utilitarianism entails that truth is not the aim of belief (nor a norm of assertion)," you should explain what you take utilitarianism to be and explain how it entails that truth is not the aim of belief. My point was mainly that I don't see the entailment that you claim there is.
Posted by: Doug Portmore | October 16, 2006 at 09:49 AM
Isn't the property of being morally right just the property of being that which morally ought to be performed?
If you're happy with that, then suppose that A is some assertion and A has the property of being morally right.
1. :. A ought morally to be performed. From the claim above.
2. A is right iff. A maximizes overall utility. From AU.
3. :. A maximizes overall utility. From 1,2 and claim above.
4. It is not in general true that A maximizes overall utility iff. A is true.
5. :. It is not in general true that A ought to be performed iff. A is true. From 1,2,3,4.
It is (5) that I claim is inconsistent with the norm of assertion that roughly you ought to assert A only if A is true. Which of (1)-(5) is mistaken?
Posted by: Mike Almeida | October 16, 2006 at 11:04 AM
Doug,
thanks, that's helpful. I can imagine a world where utilitarians are instrumentally irrational. But in the relevant sense these people are doing what they believe they ought to do. They believe they ought to phi, that by psi-ing they would phi, and that they therefore ought to psi. And they do. It's just that the instrumental belief is false. I don't see how this would make all true moral theories in that world esoteric.
This:
'My claim was that AU says nothing about what our beliefs, intentions, goals, motives, or any other of our mental states should be.'
is something I never understood. If a theory says what we ought to do, surely this implies what we ought to intend to do, and so on. Namely, to do what we ought to do.
I do understand that many people see normative ethics as a theoretical pursuit to discover the normative reality. I used to think this. But now I am hesitant. The reason is that we are interested in goodness, wrongness, oughts, and so on. These are not theoretical notions scientifically defined but rather everyday notions that get their meaning through how we use them in our moral community. It would be odd if they came to denote, through our use, something that was hidden from us and that could be discovered only by technical philosophical investigation. I just cannot imagine that there is a very complicated answer to what things are wrong, good, and so on that could come as a surprise to us, that would be difficult to comprehend, and that we could ignore in our everyday life. How could we have come to mean that by our use of these ordinary notions? I'm worried that I'm getting into too much Wittgenstein...
Posted by: Jussi Suikkanen | October 16, 2006 at 11:16 AM
Mike A.,
5. :. It is not in general true that A ought to be performed iff. A is true. From 1,2,3,4.
What follows from 1, 2, 3, and 4 is:
5*: It is not in general true that A ought morally to be performed if and only if A is true.
5* is inconsistent with the norm of assertion if and only if the norm of assertion is that you ought morally to assert A only if A is true.
I didn't think that this is what the norm of assertion was. But if it is, then I think that it is quite obviously false, and so we shouldn't worry about a theory just because it is inconsistent with it.
Posted by: Doug Portmore | October 16, 2006 at 11:50 AM
Jussi,
I don't see how this would make all true moral theories in that world esoteric.
Maybe I'm not being clear. Suppose the theory in question is T. Suppose that all human beings believe T and so try to do what T says, which is to try to E (e.g., refrain from violating others' rights, maximize utility, or whatever). Further suppose that for some reason (they have false means-ends beliefs, they're incompetent, or an evil demon is messing with them) when they try to E, they always end up doing the opposite of E (e.g., violating others' rights). If, however, they believed that they ought to try to ~E (e.g., to violate others' rights), they would end up E-ing (e.g., refraining from violating others' rights). So wouldn't T be an esoteric theory on Mike's definition? That is, isn't it the case that it would be better that T not be generally believed or accepted?
Posted by: Doug Portmore | October 16, 2006 at 12:09 PM
Jussi,
If a theory says what we ought to do, surely this implies what we ought to intend to do, and so on.
I'll accept that if a moral theory says that some act token, A1, is the one, of all my alternatives, that I morally ought to perform, then I ought to intend to do A1. But the following seems false: from both the fact that a moral theory says that I ought to maximize aggregate utility and the fact that my performing A1 is what would maximize aggregate utility, it follows that when I perform A1 I ought to have the intention of maximizing aggregate utility.
Posted by: Doug Portmore | October 16, 2006 at 12:18 PM
Doug,
thanks. The case is now becoming clearer, but it's also a different one from the one we started with. Now there's us as we are, and them, who cannot act as they believe they should. We have no reason not to believe T, and thus it is not esoteric for us. But place yourself in their situation. Could you do normative ethics if you knew that whatever you came to believe, you could not do? It would be a funny situation. In fact, in that case, if we assume ought implies can, we could ensure that any random possible normative theory is false just by believing it. No theory could be true for us. Why attempt, then, to find out which one is?
You're right about the last comment. But already the first admission implies that
'My claim was that AU says nothing about what our beliefs, intentions, goals, motives, or any other of our mental states should be.'
is false. I take it that AU allegedly implies, for every situation, for each of us, a token act we ought to do. So in each case it also allegedly implies what we ought to intend to do. It is probably unable to tell us how to go about figuring out what is the right thing to intend to do in each situation.
Posted by: Jussi Suikkanen | October 16, 2006 at 01:15 PM
Very interesting discussion!
I think the problem with a radically esoteric moral theory is that it becomes impossible to do the right thing because it is right. For if you do in fact perform the right act, you won't know that it is right.
I am not Kantian enough to think that ONLY actions performed out of a sense of duty have moral worth. But I do think it must regularly be possible to do the right thing in part because you see that it is right. And radically esoteric moral theories, if I understand them, deny that possibility.
Posted by: Eric Wiland | October 16, 2006 at 02:55 PM
You cannot observe both the norm of assertion and the utilitarian moral norm. Right? So the recommendation of one norm is not consistent with the recommendation of the other. It makes no difference to me that you happen to believe this norm of assertion is false. The problem remains that utilitarians cannot be trusted to make true assertions. That is bad news, especially in coordinating behavior.
Posted by: Mike Almeida | October 16, 2006 at 03:06 PM
Thanks Jussi. Yes, I was admitting that my claim was false. But all I need for the point that I was trying to make is:
"But for the cases where AU requires that we intend to do some morally required act token, AU says nothing about what our beliefs, intentions, goals, motives, or any other of our mental states should be."
Regarding this: "if we assume ought implies can, we could ensure that any random possible normative theories is false just by believing it. No theory could be true for us."
I'm not sure that you have the relevant sense of 'can' in mind here. I'm also not sure what it means for a theory to be true or false for us. I only understand "true" as a monadic predicate. But I get what you're driving at, and I'll think about it. Thanks.
Posted by: Doug Portmore | October 16, 2006 at 03:07 PM
Mike A.,
I don't think any moral (or nonmoral) person could be trusted to always make true assertions, and I suspect this poses no special problem for coordinating behavior. I suspect that regardless of which moral theory is correct, we will go on coordinating behavior just as well as we currently do. In fact, we already are.
Posted by: Doug Portmore | October 16, 2006 at 03:12 PM
Jussi says:
I think this is exactly right, and it seems to me to mirror what Michael Smith says in The Moral Problem, namely, that the two central features of a moral theory are its objectivity and its practicality. Now an interesting question concerns the proper relation between these two features. If we describe the first feature as the ‘standard of rightness’ (or the set of moral truth-conditions) and the second as the ‘decision procedure’, we could ask the following meta-ethical question: What is the proper relation between standards of rightness and decision procedures? I can think of two answers, which alternatively privilege one or the other feature. We might think that the relation is conceptual, and that it is bridged by a principle like John Broome’s krasia, which says, roughly, that rationality requires of me that, if I believe that I ought to do something, I intend to do that thing. Alternatively, we might think that the relationship is normative, and that the correct decision procedure is the one that ought to be adopted, according to the truth conditions set by the standard of rightness.
Like Prof. Portmore, I consider the business of ethics to be the “quasi-scientific” one of providing us with the structure of the normative world. But the problem with this view—and concerns like this might have led Jussi to reconsider his initial sympathy for it—is that it seems to undermine what we take to be the intuitively plausible requirements that rationality imposes upon us. On this picture of the nature of normativity, whether we ought to believe this or that has nothing to do with our prior beliefs or intentions, but simply with the causal role those beliefs have in promoting whatever it is that our normative theory says ought to be promoted. There is something hard to digest in this implication.
Posted by: Pablo Stafforini | October 16, 2006 at 03:55 PM
Sorry, Doug. I put the 'for us' in the wrong place. It was supposed to relate to the theory and not to truth. That is, in that case the theory, understood as what determines the moral status of our acts, could not be true, full stop. I fully agree about the truth bit. I'm not sure what sense of 'can' I had in mind. I just imagined believing a theory, trying to act on it, and, as the story was told, this never coming to happen.
Eric,
I like your point. I know there are people who think that it is better to do the right thing because of the features that make it right and not because it is right. But even if this were right, surely it would at least have to make sense to act from the less-than-admirable motivation that it is the right thing to do. I'm not sure, though, that the esoteric views have to deny this. They usually say that it is better that we not believe the theory - not that we cannot do so. If we can, then we can be motivated by the theory even though we should not be.
Posted by: Jussi Suikkanen | October 16, 2006 at 03:57 PM