
February 26, 2014

Comments


Hi Chrisoula,

Very interesting. I can't think of anything other than these three. I would have assumed that we should interpret Quinn as accepting (1).

What does Quinn say that doesn't fit with this interpretation?

He says that "better than" is transitive, and that it is not true of each setting that it is better than the previous one. I thus assume he thinks "worse than" is transitive, and that it is not true of each setting that it is worse than the next one.

Good question! Here is a possibility (but I haven't looked back at the Quinn, so I might be completely wrong). Couldn't we say that Quinn takes the lesson of the ST puzzle to be that you cannot read off that X is worse than Y from X is dispreferred to Y, even when "worse than" is being read instrumentally (exactly because preferences can be rational but nontransitive, while "worse than" is transitive)? So let us say that ST decides to stop at N before the whole process begins. If this is the case, then, relative to this set of options (I am understanding 'set of options' aggregatively here, which I don't think is possible on your third interpretation), N + 1 is worse than N, even though N is dispreferred to N + 1. Had ST chosen to stop at N + 1, then N would be worse than N + 1. This depends on a "pick and stick to it" solution (which I believe Quinn favours), but I think it could be adapted to other types of solution.

Thanks for this - it is of interest to me because I have been thinking a bit lately about money pumps for intransitive preferences.

How about this: what determines the agent's behaviour is a choice function i.e. a function C that takes any set O of options to (the smallest) subset of O s.t. the agent always chooses an element of C(O) if presented with option set O. To say that X is preferred to Y (X > Y) is just to say that X ≠ Y and C ({X, Y}) = {X}. (Given intransitive preferences, it doesn't follow from X > Y that e.g. Y does not belong to C ({X, Y, Z}), though given representability it does.)

Now why not say that X is worse than Y iff X does not belong to C(O) for any O s.t. Y belongs to O. In other words, the agent does not choose X if Y is an available alternative. This captures a sense in which e.g. in the preference cycle A > A-$1 > B > C > A, it might be true that A-$1 is worse than A but false that B is worse than A (say). And it is consistent with the agent's ending up with A-$1 in a money pump, because the agent never faces an option set containing both A and A-$1.
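This choice-function definition of "worse than" can be sketched computationally. The sketch below is only an illustration, not anything in Quinn or in the comment above: the pairwise preferences are the cycle A > A-$1 > B > C > A from the comment, C(O) is computed as the undominated elements of O, and the value of C on the fully cyclic four-element set is stipulated (since C is taken as primitive here, some stipulation is needed for sets the pairwise relation leaves cyclic).

```python
from itertools import combinations

# Cyclic pairwise preferences from the comment: A > A-$1 > B > C > A
prefers = {("A", "A-$1"), ("A-$1", "B"), ("B", "C"), ("C", "A")}
options = ["A", "A-$1", "B", "C"]

def choice(O):
    """C(O): the undominated elements of O. Where the cycle leaves no
    undominated element (the full set), fall back to a stipulated
    choice -- an assumption made purely for this illustration."""
    O = frozenset(O)
    undominated = {x for x in O if not any((y, x) in prefers for y in O)}
    return undominated if undominated else frozenset({"B"})  # stipulated

def worse(x, y):
    """X is worse than Y iff X is not in C(O) for any option set O
    containing Y (here: any subset of the four options)."""
    sets = [S for r in range(2, len(options) + 1)
            for S in combinations(options, r)
            if x in S and y in S]
    return all(x not in choice(S) for S in sets)

print(worse("A-$1", "A"))  # True: A-$1 is never chosen when A is available
print(worse("B", "A"))     # False: B is chosen from e.g. {A, B}
```

On this stipulation the definition delivers exactly the verdict claimed above: A-$1 comes out worse than A, while B does not, even though all four options sit in one preference cycle.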

Hi Chrisoula,

Here's what may be a fourth interpretation or, perhaps, is just a version of (1).

(4) To say that X is better than Y, relative to the agent’s preferences, is to say that the agent prefers X to Y. To say that X is worse than Y, relative to the agent’s preferences, is to say that the agent disprefers X to Y. To say that X is better than Y, relative to the agent’s preferences and starting point P, is to say that the extent to which the agent prefers X to P is greater than the extent to which the agent prefers Y to P. And to say that X is worse than Y, relative to the agent’s preferences and starting point P, is to say that the extent to which the agent disprefers X to P is greater than the extent to which the agent disprefers Y to P.

And can't he say all this while maintaining both that worse than and better than (the non-relativized notions) are transitive?

Sergio: Thanks for your comment. Quinn does favor a “pick and stick to it” solution, but, early in the article, Quinn claims that 1000 is worse than 0 without supposing that the self-torturer has adopted any plan at all. (I’m not quite sure what the relevant set of options is when you say “relative to this set of options,” so maybe I’m misunderstanding your proposal.)

Arif: Thanks for the suggestion. I see how this works. I’ll have to think about whether this sort of explanation is open to Quinn. I’m not sure it is. Quinn thinks that rationality requires the agent to settle on some stopping point N and stick to this, even though the agent continues to prefer N+1 over N (which is why Quinn thinks rationality requires resoluteness). But if preferences are construed in the way you’re suggesting, and the self-torturer always has to make pairwise choices, Quinn would be suggesting that the self-torturer must do something that is impossible (since, according to the proposed construal of preferences, one cannot possibly opt for N while preferring N+1 when faced with the option set {N, N+1}).

Doug: Thanks for the follow-up comment. One might be able to say that “worse than” is transitive even if “worse than, relative to the agent’s preferences” is not; but if instrumental rationality is “a slave to the agent’s preferences,” as Quinn suggests, then the transitivity of “worse than” would be irrelevant to Quinn’s discussion, which concerns instrumental rationality. My understanding of your suggestion is that it would require us to dismiss Quinn’s comment about the transitivity of “worse than” as true but irrelevant; and to see him as accepting but omitting to mention the relevant claim that “worse than, relative to the agent’s preferences” is intransitive.

Hi Chrisoula:
I was thinking that Quinn could be proposing something along these lines: once you pick a point, everything above the point is worse than everything below the point. Since we know ahead of time that you are not picking 1000, we can say, independently of any picking, that 1000 is worse than 0 (in fact, this registers the fact that it is not rational, given your preferences, to pick a point so late in the game). This might seem like an artificial regimentation, as it commits him to the claim that N + 1 is also worse than 0 if I pick N, but I'm not sure that this is a problem for Quinn, given that he thinks that you should not get to N + 1 anyway.

I forgot to say: "relative to this set of options" just meant to leave open the possibility that if somehow ST got unhooked and then faced the same scenario, it would be perfectly rational for her to choose to stop at a different setting.

Sergio: Thanks for the clarification. Here’s my main worry: You suggest that the self-torturer will not pick 1000 as a stopping point because it’s not rational, given his preferences, to plan to stop at 1000. But then, even if the fact that 1000 is worse than 0 is related to the fact that the self-torturer will not pick 1000 as a stopping point, isn’t the order of explanation: *the self-torturer will not plan to stop at 1000 because 1000 is worse than 0*, not *1000 is worse than 0 because the self-torturer will not plan to stop at 1000*? We're then still left with the question: What makes 1000 worse than 0? (And now we have to keep in mind that your suggestion goes along with the idea that “you cannot read off that X is worse than Y from X is dispreferred to Y even when ‘worse than’ is being read instrumentally.”)

Thanks Chrisoula for your helpful reply to me. I'm afraid I don't know the Quinn paper very well, so excuse my naivety. But I was puzzled by your statement that the self-torturer sometimes faces a choice {N, N+1}, at least for any N < 999. Isn't it rather this: after a week at level N, he faces an option set O(N) that (for N < 999) satisfies:

O(N) = {1 more week at N followed by O(N) again, 1 week at N+1 followed by O(N+1)}

And I don't see why the fact that N+1 > N (> means preference) forces any particular choice from O(N) thus specified.
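The sequential structure just described, together with the resoluteness discussed earlier in the thread, can be given a toy sketch. Everything in the sketch is an illustrative assumption (the function names, the 1000-setting cap, and the stopping point 15 are made up for the example, not drawn from Quinn): a myopic policy that always follows the pairwise preference for N+1 over N ends at 1000, while a resolute policy stops at its planned point despite that preference.

```python
def settings_chosen(policy, max_setting=1000):
    """Walk the self-torturer's sequence of choices: at level n the
    options are 'stay at n' vs 'advance to n+1'. policy(n) returns
    True to advance. Returns the final setting reached."""
    n = 0
    while n < max_setting and policy(n):
        n += 1
    return n

# Myopic policy: always follows the pairwise preference for n+1 over n.
myopic = lambda n: True

# Resolute policy (hypothetical stopping point 15): sticks to the plan,
# even though the agent still prefers n+1 over n at that point.
resolute = lambda n, stop=15: n < stop

print(settings_chosen(myopic))    # 1000: the dispreferred endpoint
print(settings_chosen(resolute))  # 15
```

The point of the toy model is only that nothing in the recursive option-set specification forces the myopic walk; which policy the agent follows is a further fact about the agent.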

Arif: Thanks for following up. That’s a good point. I’ve glossed over it and so would need to say more; but here’s a more direct route to my view that Quinn will not be able to adopt your construal of preferences: For Quinn, the solution to the puzzle involves recognizing that rationality sometimes requires resoluteness, where this is understood to involve sticking to a plan even though it requires one to act against one’s preferences. Quinn suggests that, having adopted a plan, the self-torturer will at some point be required to stay put rather than proceed even though he prefers proceeding over staying put. Your proposed construal of preferences casts this as impossible.

Thanks Chrisoula - yes, of course you're right about that, though now it is a question what preference is supposed to be. But the definition of 'worse' that I proposed doesn't depend essentially on a behaviouristic construal of preference. Whatever binary preference is, it is a function that takes every two-element option- or outcome-set O to a subset of O that is in some (for Quinn some non-behaviouristic) sense the 'favoured' subset of O. Whatever exactly that means, it presumably can be used to define a function C that takes any option- or outcome-set of arbitrary size to its favoured subset. And then we can define 'X is worse than Y' to mean that X is not in the favoured subset of any set to which Y belongs. So *if* preference is already understood then there may be no further problem with 'worse than'. What you are making me doubt is whether it means anything to say that the self-torturer prefers X to Y.

Arif: Interesting. I see what you mean about your definition of worse not depending essentially on a behavioristic construal of preference. I need to think more about this. Let me know if you have any further thoughts in the meantime. Thanks!
