


February 16, 2013



Hi Jussi,

I am not sure I fully understand your suggestion in section 3. But a problem you might run into here is that you seem to be assuming more agreement on moral issues than there actually is. Let the society in question be the US or some US state, and let the moral issue be that of abortion, same-sex relations, euthanasia, gun control, or whatever, and you will find that with regard to these issues there is a lot of disagreement, and a lack of a generally agreed upon moral code. What there might rather be are competing blocs, or large fragments, within the society, within which there is broad agreement about at least some of these issues. So it seems to me that in addition to what you call counter-cultures, you also have to take into account much larger groups that have internalized only partially overlapping moral codes. Now of course if we make the moral rules or principles in question extremely general, and we leave them open to more or less conservative interpretation, then we might end up with more wide-ranging agreement. But then we will also need to discuss what the best, or most justifiable, specifications of these broad and widely accepted moral rules are.

Hi Sven

Thanks. I think I mentioned above that I am also thinking about partially overlapping cultures. Also, I am not really thinking about moral principles but rather about moral sensibilities. I agree that it will be very difficult to capture or pick out a majority code from the actual world (and in fact, the acceptance level of this code will be very low in our world). But, yes, the idea then is that in the actual world we can describe the remaining non-internalisers, both overlapping and non-overlapping, and in what proportions the different kinds of non-internalisers occur.

When we then compare the hypothetical principles, we stipulate that the non-internalisers in those worlds are proportionally similar to the non-internalisers in our world (even if their absolute number may vary). I'm also thinking that the overlappers will overlap with the compared world's code rather than with ours.

Hi Jussi,
Could you say more about why you think that your preferred solution (3) avoids your epistemic concerns about the first solution? If I understand correctly, the first solution is going to involve some very complicated (impossibly complicated?) calculation on more or less a priori data. On the other hand, it seems like the third solution would require vast amounts of empirical data--unless we make rough guesses, in which case, why couldn't the defender of the first strategy do the same thing?

(It's also possible I'm just misunderstanding something.)

Hi Preston

That's a good question, and I am very worried about the epistemic demandingness of all these alternatives. I guess I was thinking that (3) would be at least a tiny bit less demanding than (1) for a couple of reasons. For one, it would limit the number of compared worlds somewhat relative to (1). Secondly, as you note, we could use some of our empirical knowledge in comparing these worlds. In the actual world, many different kinds of ethical principles, and sets of them, have already been adopted. By keeping the kinds of actual people who are non-adopters constant, we could use our information about how these principles work together with the actual people who have not adopted them.

I do know that there is one way of making the epistemic demandingness smaller in both proposals. This would rely on going down something like Brad Hooker's route. On this alternative, we consider what improvements we can recognise for the principles we have in terms of the principles' consequences. So, under (3) we could think that right and wrong depend on what principles we could identify as having better consequences than the ones we already have, assuming n% acceptance and that the rest of the people have counter-cultures proportionally similar to those in our world. Under (1) we could ask whether we could recognise improvements relative to all counter-cultures when we do the averages. I think even here (3) would have an advantage, as it would be easier to recognise improvements.

Of course, if you are a realist about ethical facts, then this kind of unknowability will be harmless. However, if you do have anti-realist inclinations (which often motivate views like contractualism), then the epistemic demands are a strike against views along these lines.

I suspect that there is no really satisfactory solution to this problem. However, I think that the best stab that we could make at such a solution would have to be sensitive to what we know empirically about how people respond to societies in which different sorts of moral codes are very prevalent. Suppose, for example, that we're thinking about a moral code that is very Victorian where sex is concerned, and that we have empirical reason to think that in a society in which enough of a teaching effort is made to instill such a code in (say) 90% of the population, many people in the other 10% are likely to react violently against the code and be wildly promiscuous or fetishistic. For most moral codes that we're interested in, we probably can't really make very confident predictions about what counter-cultures would emerge if enough of a teaching effort were made to induce a given percentage of the population to internalize the code. And, of course, there may be multiple forms that this "teaching effort" could take, complicating the problem further. That's why I don't really think there's a solution. But I think that attempted solutions that ignore these sorts of empirical considerations get off on the wrong foot. If I understand your proposals correctly, Jussi, then that applies to all three, although I'm not sure that I'm interpreting the last one correctly.

Hi Dale

I agree. I worry that this really is a killer objection. One idea I've been thinking about is to go back to the 100% in comparing the codes and forget acts of self-improvement, punishment, disagreements, and the like. In this way, we could construct an ideal theory for ideal circumstances, as Rawls called it. This might still teach us something about the nature of morality. Of course, we would need to find some other way of figuring out what's right and wrong in the non-ideal circumstances, and this presumably would make all the theories discussed less important, given that in the actual world we are in the non-ideal circumstances.

What you correctly observe makes the problem even more difficult. In all the solutions I sketched out, I assumed that the counter-cultures are not sensitive to the majority moral code - they are pretty much doing their own thing. Even then, different counter-cultures might have different consequences, given the kind of interaction that happens between the counter-cultures and the majority morality. However, as you correctly point out, it may well be that the nature of the counter-cultures is sensitive to the majority codes. I do think that making this plausible assumption makes the problem even more intractable, despite the empirical evidence we have about how counter-cultures are shaped by majority moralities. If we had to compare a vast number of potential moralities in the circumstances of imperfect adoption, the consequences really would in this case be unknowable.

I'm starting to think that Brad Hooker was right. I think the best we can do is to start from the actual world and our conventional morality. We can then consider whether we could have recognisably better codes for our circumstances. In trying to recognise these codes, we should take into account the number of people who have not internalised the code yet, whether that number would change if we adopted the new code, and whether the behaviour of the non-adopters would change. If we can recognise an improvement, then right and wrong would be determined by that code. At the moment, I cannot see anything better than this really.

I've been trying to think through a different kind of solution to this problem, or rather a different version of RC/RU for which the problem doesn't arise. That is an "individual" ideal-code theory, according to which the moral code that I ought to obey is the one that it would be optimal for me to internalize, given the world as it is. Needless to say, even if this sort of view doesn't face this problem, it faces lots of others.

Hi Dale

that's very interesting. I'd like to hear more about this version of the theory.

My first inclination is to say that you can generate the problem even for the individual theory. You've got two options. Either you believe that all your future time-slices will internalise the compared moral codes. In this case, the moral code will not be able to generate what Ross called duties of reparation (things like the requirement to compensate for harm done). To avoid this problem, you might think that fewer than 100% of your future time-slices will have internalised the code, so that you can evaluate what the best policy for reacting to your own bad actions is. But then we can ask what the rest of your time-slices are doing. Whatever we stipulate seems somewhat arbitrary, and whatever we stipulate might have consequences for which principles come out best in the individualistic comparisons.

This is just to point out that this seems to be a more general problem with theories of this type.

Hi Jussi,

It could well be that, whichever individual code we take, that code will have one set of consequences if one group of people have internalised it and another set of consequences if different people have internalised it. So, who are the people we should use in the comparisons?

Doesn't consequentialism have a standard solution to problems of this kind, namely, expected value?

For each code C, consider the expected value of the proposition: n% of people internalise C. As you say, there are different ways this could be true: different sets of people could be the ones who internalise C. So the expected value will be a weighted average of the values of these different ways it could be true. The ideal code will be the one for which this proposition has the greatest expected value.
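The weighted average Campbell describes can be put in a few lines of code. This is only a toy sketch: the values and probabilities below are invented for illustration, and nothing in the thread fixes what the numbers stand for.

```python
# Toy sketch of the expected-value proposal: the value of the proposition
# "n% of people internalise code C" is a probability-weighted average over
# the different ways it could be true, i.e. the different possible sets of
# internalisers. All numbers below are invented.

def expected_value(scenarios):
    """scenarios: list of (probability, value) pairs, one per way
    the internalisation proposition could come true."""
    return sum(p * v for p, v in scenarios)

# Two codes, each with three hypothetical internaliser-sets:
code_a = [(0.5, 10), (0.3, 4), (0.2, -2)]
code_b = [(0.5, 6), (0.3, 6), (0.2, 6)]

# The ideal code is the one whose proposition has the greatest expected value.
ideal = max([code_a, code_b], key=expected_value)
print(round(expected_value(code_a), 2))  # 5.8
print(round(expected_value(code_b), 2))  # 6.0
```

On these made-up numbers code B comes out ideal, even though code A does better in its most likely scenario.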

Hi Campbell,

Thanks. I may be missing something, but this really sounds like my first solution candidate (1), and I think this really is one plausible avenue which rule-consequentialists, and even contractualists and Kantians, should take (even if they would have to make relevant changes to how expected value is understood in a non-aggregative way).

One thing we would need to decide is whether all the alternative counter-culture scenarios for a given code are equally likely or whether we should consider some of them more likely than others.

I do have a couple of worries. My first worry is that this will make all theories of this sort very, very epistemically demanding. Now, you might think that this is not a problem, as these views are criteria of wrongness rather than deliberation procedures. But I worry that these sorts of complications will really make the wrongness facts unknowable, which will sever the connection between wrongness and our practice of blaming people.

I think my second worry is even more fundamental. These views are supposed to tell us something very fundamental about either what wrongness is or what makes acts wrong. Now the idea is that what consequences codes have, even in worlds in which very bizarre counter-cultures have been adopted, is partly constitutive of right and wrong. That this is so far-fetched really worries me too.

Sorry, Jussi, I didn't read your post carefully enough.

Still, your solution (1) seems not quite the same as what I suggested. Notice, whereas my solution employs the notion of expected value, yours is expressed in terms of "expected consequences". I'm not quite sure what you mean by this.

Perhaps I can make the point this way. It is unclear to me how the expected-value solution could work for contractualist and Kantian theories. The mathematical concept of the expected value of a random variable only makes sense when the possible values of the variable are numbers. In the case of consequentialism, these numbers may be taken to represent the goodness of possible outcomes (propositions or states of affairs or whatever). What would they represent on a contractualist or Kantian approach?

Perhaps this is related to the problems for non-consequentialist theories raised by Michael Smith and Frank Jackson in 'Absolutist moral theories and uncertainty'.

Hi again, Campbell

Also, I hope you are well. I did have the expected value of the codes in mind in (1), but I was sloppy and expressed the idea badly. Sorry about this.

Here's just a rough sketch of how we could run something similar in the contractualist framework. Assume that we are comparing two codes, A and B, for reasonable rejectability. We compare these codes under an internalisation rate n which is less than 100%. We are considering the codes in the circumstances of three counter-cultures p, q, and r.

Each pair [A,p], [A,q], [A,r], [B,p], [B,q] and [B,r] will give each relevant individual of the imagined worlds a standpoint. There is a degree to which these standpoints are burdensome, objectionable, or unchoiceworthy. Let's assume that we can give a numeric value for such personal burdensomeness of living under a given code.

For each individual, we can then calculate the expected burdensomeness of a given code. For individual S, the expected burdensomeness of A, for example, is (the probability of counter-culture p when A has been internalised (we can assume that this is .33) times the burdensomeness of S's life in [A,p]) plus (the probability of counter-culture q when A has been internalised (.33) times the burdensomeness of S's life in [A,q]) plus (the probability of counter-culture r when A has been internalised (.33) times the burdensomeness of S's life in [A,r]).

Each individual then has an expected burdensomeness of their life under codes A and B. Assume that individuals can make objections to A and B on the basis of the expected burdensomeness of these codes for them. The person whose life under A has the highest expected burdensomeness can then reasonably reject A if no one's life under B has equally high expected burdensomeness.
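The calculation just described can be sketched as follows. The burdensomeness numbers and the two individuals (S and T) are invented purely for illustration; the equal 1/3 counter-culture probabilities follow the stipulation above.

```python
# Toy model of Jussi's contractualist sketch: two codes (A, B), three
# counter-cultures (p, q, r) each assumed equally likely, two individuals
# (S, T). burden[code][person] lists that person's burdensomeness under
# counter-cultures p, q, r in order. All numbers are invented.
burden = {
    "A": {"S": [3, 6, 9], "T": [2, 2, 2]},
    "B": {"S": [4, 4, 4], "T": [5, 5, 5]},
}
probs = [1/3, 1/3, 1/3]  # probability of p, q, r given internalisation

def expected_burdensomeness(code, person):
    """Probability-weighted average of a person's burdens under a code."""
    return sum(p * b for p, b in zip(probs, burden[code][person]))

def worst_off(code):
    """The person with the highest expected burdensomeness under a code."""
    return max(burden[code], key=lambda s: expected_burdensomeness(code, s))

def reasonably_rejects(code, alternative):
    """The worst-off person under `code` can reasonably reject it if no
    one's expected burdensomeness under `alternative` is as high."""
    complainant = worst_off(code)
    eb = expected_burdensomeness(code, complainant)
    return all(expected_burdensomeness(alternative, s) < eb
               for s in burden[alternative])

print(worst_off("A"))                # S
print(reasonably_rejects("A", "B"))  # True
```

With these invented numbers, S's expected burdensomeness under A (6.0) exceeds everyone's under B, so S can reasonably reject A.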

Modelling the Kantian approach in the same way might be more artificial, but I take it this can be done in terms of the expected effectiveness of your maxims under different situations, when this is understood roughly in the same way as burdensomeness under contractualism.

Thanks, Jussi. That's interesting.

This is a bit tangential to your original point, but I wonder what you think of the following objection to the view you just sketched.

Suppose there are only two individuals, I and J, and two possible outcomes (or counter-cultures), X and Y. And suppose the burdensomeness of each code for each individual in each outcome is as shown in the following table:

          Outcome X   Outcome Y
           I     J     I     J
 Code A    5     5     5     5
 Code B    9     0     0     9

For both individuals, the expected burdensomeness of A (5) is greater than the expected burdensomeness of B (4.5). So both individuals can reasonably reject A. However, one might think that A is the better code, because, although there will be greater total burdensomeness under A, it will be more equally distributed.

Perhaps you should say that the optimal code is the one that minimises the expected maximum burdensomeness (EMB), rather than the maximum expected burdensomeness (MEB). In the above example, the EMB of A (5) is less than the EMB of B (9), but the MEB of A (5) is greater than the MEB of B (4.5). But I'm not sure how minimising EMB could be connected to the story about reasonable rejectability, which seems to fit better with minimising MEB.

Hi Campbell

Thanks for this. This is indeed getting very interesting. So, first, let me note that I am a little hesitant about having moral intuitions about mere numbers without knowing what the numbers stand for (this is why I am sceptical about Ross's objection to consequentialism too). Thus, if we described what makes up these numbers, we might agree that code B is better.

If I've understood EMB correctly, I've got one worry about it. I've also got two ways in which MEB might be defended against this kind of objection.

My worry about EMB concerns cases in which, instead of two outcomes/counter-cultures, we have thousands. Now imagine that there is a code which is very good for an individual in all of these thousands of scenarios except in one in which a fairly bizarre (and, by stipulation, unlikely) counter-culture has been internalised. In this situation, according to EMB (I hope I am getting this right), the agent could reasonably reject this code despite the fact that it is very, very good for her in all except one of the scenarios. This seems unintuitive to me. The code that minimises the expected maximum burdensomeness could therefore be fairly burdensome to all, if for every other code there was just one unlikely scenario that was more burdensome to all. This is just a hunch though.

Let me also note two ways in which MEB could be defended. First, you could build some notion of equality into the burdensomeness of individuals' lives. Thus, how an agent is related to other people would be part of the concrete personal standpoint - of how her life goes. Finding your life unequal to others would thus make your life more burdensome. Only after this would we assign numbers to the burdensomeness of lives. This might make it the case that, once we have the numbers, MEB actually gives out fairly intuitive codes, even if the numbers might look bad.

The second option would be to go prioritarian. This would be to think that, when we calculate the expected burdensomeness of codes with respect to a range of outcomes, we multiply the burdensomeness of the most burdensome standpoints in these worlds by some factor. This too might help with the equality worries.
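This prioritarian option can be given a toy illustration using Campbell's earlier table. The priority factor of 2, and the particular way the weight is applied (counting the worst-off standpoint twice), are arbitrary assumptions of mine, not part of the proposal.

```python
# Toy sketch of the prioritarian variant: in each outcome, the most
# burdensome standpoint is counted PRIORITY times instead of once before
# the expected burdensomeness of the code is computed. The factor 2 and
# the weighting scheme are illustrative assumptions.

PRIORITY = 2  # hypothetical weight on the worst-off standpoint

def prioritarian_burden(outcomes, probabilities):
    """outcomes: list of {person: burdensomeness} dicts, one per
    counter-culture; probabilities: likelihood of each outcome."""
    total = 0.0
    for outcome, p in zip(outcomes, probabilities):
        worst = max(outcome.values())
        # Total burdensomeness with the worst-off standpoint double-counted.
        weighted = sum(outcome.values()) + (PRIORITY - 1) * worst
        total += p * weighted
    return total

# Campbell's table, with both outcomes treated as equally likely:
code_a = [{"I": 5, "J": 5}, {"I": 5, "J": 5}]
code_b = [{"I": 9, "J": 0}, {"I": 0, "J": 9}]
print(prioritarian_burden(code_a, [0.5, 0.5]))  # 15.0
print(prioritarian_burden(code_b, [0.5, 0.5]))  # 18.0
```

On these numbers the weighting favours code A's equal distribution (15.0) over code B's unequal one (18.0), which is the direction the equality worry pulls in; note, though, that this measure aggregates across persons.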

Here's how to calculate the EMB of a code C. For each possible counter-culture X, find the maximum burdensomeness experienced by any individual under C given X. Then multiply this by the probability that X would emerge given that C is adopted. Then add together all the results.
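This recipe can be checked against the earlier two-person table, assuming (as that example implicitly does) that outcomes X and Y are equally likely. MEB is included alongside for comparison; the code is a sketch of my own, not from the thread.

```python
# EMB vs MEB on the earlier two-person table, with outcomes X and Y
# assumed equally likely.

probs = {"X": 0.5, "Y": 0.5}
burden = {  # burden[code][outcome][person]
    "A": {"X": {"I": 5, "J": 5}, "Y": {"I": 5, "J": 5}},
    "B": {"X": {"I": 9, "J": 0}, "Y": {"I": 0, "J": 9}},
}

def emb(code):
    """Expected maximum burdensomeness: for each outcome, take the
    worst-off individual's burden, then average over outcomes."""
    return sum(probs[o] * max(burden[code][o].values()) for o in probs)

def meb(code):
    """Maximum expected burdensomeness: compute each individual's
    expected burden across outcomes, then take the largest."""
    people = burden[code]["X"].keys()
    return max(sum(probs[o] * burden[code][o][p] for o in probs)
               for p in people)

print(emb("A"), emb("B"))  # 5.0 9.0
print(meb("A"), meb("B"))  # 5.0 4.5
```

This reproduces the figures above: A beats B on EMB (5 vs 9), while B beats A on MEB (4.5 vs 5).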

I don't think what you say about EMB in your worry is quite right. As I said before, it is hard to connect EMB with reasonable rejectability. The fact that one code has a greater EMB than another does not obviously provide any particular individual with grounds for rejecting the former code. In my example above, Code B has a greater EMB than Code A, but it is unclear how either individual could appeal to this fact in order to reject B. Indeed, we might expect that neither individual would even want to reject B, because both have a greater EB in A. This may be a case where the code which is preferred by everyone is not the best code.

I think these issues are related to so-called 'ex ante' and 'ex post' prioritarianism, as discussed e.g. by David McCarthy in 'Utilitarianism and Prioritarianism II' (a really nice article).

Hi Campbell

Thanks for the clarification. I hadn't got EMB right. Can I ask one more question about this proposal? When you say at the end 'then add together all the results', is it that we add different individuals' burdens under different counter-cultures (whoever's is the most serious one under the code + counter-culture pair) times their likelihoods?

If this is the case (and this, I think, is what made me read the view wrongly), then we are making interpersonal aggregations, which are ruled out in contractualism. This explains, as you say, why there isn't a connection to reasonable rejection.

Thanks for the suggestion about the McCarthy paper.

I'm afraid I don't quite understand the proposed solution, but in any case the general proposal to rate our moral principles as a function of how they would work in some ideal or semi-ideal world (Kant, Scanlon, etc.) always struck me as rationally unmotivated, since we don't live in such a world (in this I agree with the remark Jussi ascribed to Hooker, although I found Hooker's presentation of his view less clear than this simple summary, at least in _Ideal Code_). Such a test may be a /necessary/ condition of our principles being moral, but it is certainly not /sufficient/, since the good and bad effects of additional persons conforming to certain principles are sometimes non-linear. Some principles may work fine in an ideal or semi-ideal world, but awfully in others.

What we need are principles that work wherever we happen to be, including ones for the current world, of course--but we must also be prepared to act differently if the number of people following certain principles changes, which in turn changes the effects of our acting on our current principles, just as we should be prepared to make similar changes under other circumstances. All versions of ideal-world tests seem simply to be proposals that we /exclude/ from consideration any principles which would specify--in their antecedent description of the circumstances under which we should perform certain consequent actions, or value certain things, etc.--the class of descriptions including features like "and when the number of persons following similar principles is M...". I have never understood why anyone would want to rule such principles out of bounds in a fundamental moral test.

So if you start with an ideal-world test, yes, you'll need some more or less complex way to then try to rule out principles which lead to this result, but this is the long way around. I think we should instead abandon the ideal-world condition and go straight to moral supervenience, requiring that our moral principles be acceptable wherever their antecedent conditions hold, with no restrictions on what kinds of antecedent conditions are to be considered. This makes the guiding test whether we would want *any* number of persons following our principles, which then forces us to adopt (in many cases) principles which prescribe different actions under different circumstances, including those in which different numbers of people follow just those principles.
