
January 06, 2014

Comments


I'm a little unclear on one thing, Doug. In (C2) and (C5), you use the genitive expression, "your ending up with more money rather than less money." Can you put that some other way? It looks suspicious to me. More money... than what? (The subordinate 'rather than less money' seems to do no work -- the contrast with 'more x' is always 'less x', right? What I'm asking is, more than what, less than what?)

If you could spell out the actual amounts, instead of 'more money' and 'less money', it would be clearer, but I think that's hard to do. I worry that there's some trick embedded there.

Doug: I'm more than happy to reject (C2). I don't think it's true that I ought to believe that choosing to take what's in both boxes is a necessary means for obtaining more money than by taking one box only. I believe the paradox presupposes backwards causation, and that sincerely intending to take one box only *will* result in your getting more money than taking both boxes would. Because I think I ought to believe the paradox presupposes backwards causation of this sort, I think I ought to believe that taking one box is a necessary means for me to end up with more money -- contra (C2).

Hi Marcus,

Why believe that the paradox presupposes backwards causation? There are other possibilities that would explain the high correlation between the predictor's predicting you'll take both boxes and your taking both boxes, such as common cause.


Hi Jamie,

I'm making a comparison between choosing what's in both boxes and choosing only what's in B. I'll end up with more money by choosing what's in both boxes than by choosing only what's in B: more precisely, I'll end up with exactly $10,000 more. Do you deny that if I choose both boxes I'll have $10,000 more than I would have if I had chosen only box B?
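
To make that comparison concrete, here is a minimal numerical sketch of the dominance point, assuming $10,000 in the visible box and $1,000,000 (or $0) in box B; the function and variable names are purely illustrative:

```python
# A minimal sketch of the dominance comparison, assuming $10,000 in the
# visible box and $1,000,000 (or $0) in box B; names are illustrative only.

VISIBLE_BOX = 10_000
BOX_B_IF_PREDICTED_ONE = 1_000_000

def payoff(choice, predicted_one_box):
    box_b = BOX_B_IF_PREDICTED_ONE if predicted_one_box else 0
    return box_b + (VISIBLE_BOX if choice == "both" else 0)

# Holding the prediction (and so the contents of B) fixed, two-boxing
# yields exactly $10,000 more in either case.
for predicted_one_box in (True, False):
    print(payoff("both", predicted_one_box) - payoff("one", predicted_one_box))
# prints 10000 twice
```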

If you want, we can change things as follows (although I think that the original is preferable):

(C2*) You ought to believe that choosing to take what’s in both boxes is a necessary means to your ending up with an additional $10,000 -- additional to whatever is in box B.

(C5*) You ought to want an additional $10,000.

(C6*) If you ought to want E and you ought to believe that your X-ing is a necessary means to your getting E, then you should, other things being equal, intend to X. And other things are equal.

(C7*) If you should intend to X, then you should do X.

Oh, okay.
Then I think (C6*) is not true, because of the last clause. I don't think other things are equal.

I do think that choosing both is a necessary means to getting $10,000 in addition to what's in box B. And you want and ought to want $10,000 in addition to what's in box B. But the thing is, if you choose both boxes there will be much less money in box B than there will be if you choose just one box. So it does not seem to me that other things are equal.

I can see that someone persuaded by the causal reasoning is going to be quite certain that there is some good sense in which other things really are equal, and that a version of (C6) will still be true when that sense is used; it's just that this is going to be exactly the point at issue between the person persuaded by the causal reasoning and the person persuaded by the evidential reasoning.
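
A rough sketch of how the two calculations come apart, assuming for illustration a predictor who is right 99% of the time (the reliability figure, like the names below, is an assumption and not fixed by anything in the case as described):

```python
# A rough sketch of why the two recommendations diverge. The 0.99 reliability
# figure for the predictor is an assumed number, purely for illustration.

RELIABILITY = 0.99
VISIBLE_BOX, BOX_B_FULL = 10_000, 1_000_000

def evidentially_expected_payoff(choice):
    # Conditional on my choice, how likely is box B to be full?
    p_full = RELIABILITY if choice == "one" else 1 - RELIABILITY
    return p_full * BOX_B_FULL + (VISIBLE_BOX if choice == "both" else 0)

print(evidentially_expected_payoff("one"))   # 990000.0
print(evidentially_expected_payoff("both"))  # 20000.0
```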

Hi Jamie,

I knew adding an other things being equal clause was a bad idea. Would you reject both of the following?

(C6**) If you ought to want E, you ought to believe that your X-ing is a necessary means to your getting E, and you ought to believe that X-ing will not have any adverse effects and is not intrinsically bad, then you should intend to X.

(C6b) You ought to believe that X-ing will not have any adverse effects and is not intrinsically bad.

Well, yes, I guess I would reject (C6**). I mean, it in effect says that the outcomes that are connected to your actions by evidence but not cause do not matter. Right? And that's what is at issue between the evidential theory and the causal theory.

Have I interpreted (C6**) correctly?

Hi Jamie,

Maybe that's right, but I'm not sure. What would you say is the evidentialist's alternative to C6**? Perhaps, it's this:

(E6) If you ought to want E (say, utility) and you ought to believe that your X-ing is a necessary means to your getting the most evidentially expected E, then you should intend to X.

But even E6 doesn't get you to the conclusion that you should choose only what's in box B. After all, there is no reason for you to believe that your choosing only what's in B is a MEANS (let alone a necessary means) to your getting the most evidentially expected E. So the antecedent is false.

I, of course, concede both that "if you choose both boxes there will [most likely] be" no money in box B and that if you choose only what's in box B, there will most likely be $1,000,000 in box B, but that doesn't mean that your choosing only box B is an action by which it is brought about that there is $1,000,000 in B. Choosing only box B isn't a means to bringing about the outcome in which there is $1,000,000 in B.

What the evidentialist would need is something like:

(E6*) If you ought to want E (say, utility) and your X-ing has the most evidentially expected E, then you should intend to X.

But E6* is false given that the relevant ought here is not the fact-relative ought, but the subjective ought. It is not the case that you subjectively ought to intend to X if you ought to want E (say, utility) and your X-ing has the most evidentially expected E. For suppose that you ought to believe that your X-ing will ensure that you get absolutely no E. In that case, surely, you shouldn't (subjectively speaking) intend to X.

I'm a little lost.

You're right that no substitute principle that uses the idea of 'means' is going to be amenable to an evidentialist. 'Means' is causal, after all, and so is 'bringing about'. We evidentialists think causal influence is highly overrated.

I thought we already had a very good substitute for means/ends principle reasoning, and it was just decision theory. (Conform your preferences to the axioms!) Do we really have to bring back the plodding old dichotomous dogmatism now? If so, I guess the proper evidentialist substitute for "x is a necessary means for y" would be "pr(y|~x) = 0."
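
Here is one way that proposed substitute could be cashed out as a toy check; the joint distribution below is made up solely for illustration:

```python
# One way to cash out the proposed substitute: treat "X is a necessary means
# to Y" as Pr(Y | not-X) = 0. The joint distribution below is invented solely
# to illustrate the check.

def evidentially_necessary(joint):
    """joint maps (x, y) pairs of booleans to probabilities."""
    p_not_x = joint[(False, True)] + joint[(False, False)]
    return p_not_x == 0 or joint[(False, True)] / p_not_x == 0

joint = {(True, True): 0.5, (True, False): 0.1,
         (False, True): 0.0, (False, False): 0.4}
print(evidentially_necessary(joint))  # True: Y never happens without X
```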

Hi Jamie,

I should say that I'm a bit out of my depth in that there is a lot that I haven't read on Newcomb's problem. Moreover, I'm not well versed in decision theory. So please be patient. I appreciate your help.

You say that a good substitute for the means/ends principle is to conform your preferences to the axioms. But what do evidentialists use to bridge the gap between preferences and intentions if not a means-end belief? Is it a belief about evidentially expected utility?

Is the alternative something like this, then:

(E6**) If you ought to want E (say, utility) and you ought to believe that your X-ing has the highest evidentially expected E, then you should intend to X.

If so, then my approach is a dead-end. The same issues will appear when looking at attitudes as when looking at acts. I don't find E6** intuitively plausible, but that's probably just because I'm not an evidentialist.

Just as an aside, since I don't think you meant this to be important: according to decision theory utility is not really something you want (or ought to want). It is more of a measure of how much you want other things.

I haven't thought much about how to bridge the gap between preference and intention. I guess I'd try the simplest approach first: intend the act you prefer. Is there a problem with that?

Also, are there interesting examples illustrating the gap between preference and intention? It seems like there would be something badly wrong with someone who didn't intend the thing she preferred. That might be someone who was so irrational that formulating normative advice for her would be pointless. But quite possibly I'm missing something interesting there.

Hi Jamie,

I doubt that you're missing something. I think rather that I am probably pursuing a dead end. But let's see, because I'm still not clear on how the evidentialist should respond.

Now if you go with "you ought to intend to do what you ought to prefer to do," then won't the one-boxer need to hold that you ought to prefer to do the act that has the most evidentially expected utility? But what I should want is more money, not to perform the act with the most evidentially expected utility.

Suppose I choose only what's in box B. Further suppose that the predictor was wrong and thought that I would choose both boxes. I, then, get $0. Of course, I performed the act that had the highest evidentially expected utility -- the act of choosing only what's in box B. But that's not, it seems to me, what I care about, nor what I ought to care about. I care, and ought to care, about the money I end up with. And I would have ended up with more money had I chosen to take what's in both boxes.

So I guess that I would deny that I ought to prefer to perform the act with the most evidentially expected utility. I think that I shouldn't care about whether the act that I perform has the most evidentially expected utility. Instead, I should care only about whether I ended up with more money than I would have had I performed some alternative act. But I'm guessing that you'll say that's just what's at issue in the debate.

I was thinking (probably wrongly) that those on all sides of the debate could (or should) agree about how to move from desires to intentions and that there is no reason to desire to perform an act with the highest evidentially expected utility.

I don't really get it.

Aren't all of the oughts indexed to probability distributions? That's how I think it works, anyway. (Happy to say more about this if you want.) So pick a probability distribution, p, and index the ought to that -- no fair switching in the middle. Then surely it will turn out that the act you ought[p] to prefer is the act with the highest expected[p] utility. No? Counterexample?

If you now pick another probability distribution, q, then it will likely turn out that there is a different act you ought[q] to prefer, and that one has the highest expected[q] utility.

Do you not like the indexing idea?
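
A toy rendering of the indexing idea, on which the act you ought[p] to prefer is the one with the highest expected[p] utility; both distributions below are invented for illustration (one makes the contents of box B probabilistically depend on the act, the other does not):

```python
# A toy version of the indexing idea: fix a probability distribution, and "the
# act you ought[p] to prefer" is the act with the highest expected[p] utility.
# Both distributions below are invented for illustration.

OUTCOMES = {("one", True): 1_000_000, ("one", False): 0,
            ("both", True): 1_010_000, ("both", False): 10_000}

def preferred_act(p_b_full_given_act):
    def expected_utility(act):
        p = p_b_full_given_act[act]
        return p * OUTCOMES[(act, True)] + (1 - p) * OUTCOMES[(act, False)]
    return max(("one", "both"), key=expected_utility)

p = {"one": 0.99, "both": 0.01}   # act-dependent probabilities
q = {"one": 0.50, "both": 0.50}   # act-independent probabilities
print(preferred_act(p), preferred_act(q))  # one both
```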

I guess that what I'm saying boils down to this: I see why I ought to prefer that my act has the greatest causally expected utility, but I fail to see why I ought to prefer that my act has the greatest evidentially expected utility. And it seemed to me that the claim that I ought to prefer that my act has the greatest causally expected utility was less controversial than the claim that agents ought to maximize causally expected utility. But you'll point out that the act that we ought to perform and the act that we ought to most prefer are inextricably linked. And so the issue is as much about what we ought to prefer as about what we ought to do, and, thus, my approach makes no headway merely by switching the conversation from acts to attitudes.

Neither form of decision theory tells you that you ought to prefer that your act has the greatest expected utility.

The theories tell you to conform your preferences to the axioms. The two kinds of theories have different sets of axioms, so they give you different advice, although the difference only shows up in very exotic situations.

Find someone whose preferences conform to Richard Jeffrey's axioms and I'll show you someone who prefers the act with the greatest desirability. (He uses 'desirability' instead of 'utility'.) You just follow the axioms, and the maximizing thing takes care of itself.
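
A compact, illustrative sketch of Jeffrey-style desirability, understood as a probability-weighted average of the desirabilities of the outcomes, conditional on the option; all of the numbers here are made up:

```python
# A compact sketch of Jeffrey-style desirability: the desirability of an option
# is the probability-weighted average of the desirabilities of the outcomes,
# with the weights conditional on the option. All numbers are made up.

def desirability(option, p_outcome_given_option, des_outcome):
    return sum(p_outcome_given_option[option][o] * des_outcome[o]
               for o in des_outcome)

des_outcome = {"millionaire": 1.0, "not": 0.0}
p = {"one":  {"millionaire": 0.99, "not": 0.01},
     "both": {"millionaire": 0.02, "not": 0.98}}
print(max(p, key=lambda a: desirability(a, p, des_outcome)))  # one
```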

I have a chapter of the Oxford Handbook of Rationality (Mele & Rawling) on decision theory and morality. I think I can even send you a nice pdf if you want.

Yes, please. And thanks for your help.

Doug: Do you get the impression from one-boxers that they'll be willing to grant (C1)? I'm thinking they will fight you on that.

Jamie, I agree with all you said in your last comment, and at some point in my life I used to think that everyone else agreed with these points too. But doesn't Weirich take utility to be primitive (something like a comparative measure of strength of desire, and strength of desire is not conceptually linked to preference) and a principle roughly like "Choose the act with the greatest (or, such that no other alternative act has, etc. etc.) expected utility" to be a (true) substantive normative principle? I am also pretty sure that he denies that utility should be defined as something like "that which is maximized by rational action", so I think he would deny that "You just follow the axioms, and the maximizing thing takes care of itself". At any rate, I haven't looked back at these parts of Weirich's book in a while, so I might be mistaken here, and this is obviously not relevant to your debate with Doug, and yet I found myself typing this comment...

Interesting. I did not remember Realistic Decision Theory (the only one of Weirich's books I have read) that way. I have it on my shelf, but in my office, where I am not.

However, even someone who thinks of utility as primitive, and believes it is a substantive principle that we are to maximize its expectation, can't deny that the maximization takes care of itself if only we will conform to the axioms. I mean, that's a theorem. (In fact it's the theorem, you might say.)

Hi Landon,

I'm not sure. I think, though, that they should accept C1 and reject one of the others, as Jamie did.

Jamie, of course you are right that following the axioms will suffice to maximize utility. I thought you had meant to say that *by definition*, you'll be maximizing utility by following the axioms, since utility is defined as the quantity that is maximized when your preferences conform to the axioms (but perhaps I was reading too much into what you were saying). This is different (I think) from inferring, via the representation theorem, that by conforming your preferences to the axioms you maximize utility.
