
July 18, 2013

Comments


What are the Proviso’s Provisions?

Gijs van Donselaar

In ’Twenty-Five On’ David Gauthier proves to be more attached to the two compliances he recommends in Morals by Agreement, namely to the Lockean Proviso prior to cooperation and to a subsequent MRC split of the benefits of cooperation, than he is to the claim that these compliances can be grounded in rational choice orthodoxy. His present departure from orthodoxy was provoked by Rubinstein’s critique of the MRC bargaining result (favoring the Nash bargaining solution in its stead), and by my own doubts about his arguments for the rationality of compliance with the Lockean Proviso. Gauthier now offers the Pareto-optimizer, seeking maximin relative fulfillment, instead of the (constrained) maximizer as the model of rational agency in cooperation. As for the Proviso, Gauthier now offers the broadened idea that agents who seek mutual benefit will abstain from taking advantage of others, whether prior to cooperation, through cooperation, or even in the absence of cooperation; but he admits that he provides no further argument for the rationality of this abstention, leaving that as a gap within the contractarian enterprise. I will not attempt to close that gap. But I suggest that in order to come up with its justification in the first place we should have a proper understanding of the Proviso’s consequences. And I think such an understanding is lacking. So here is a gap too.
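For readers less at home in the bargaining-theory background, the contrast between an MRC split and the Nash bargaining solution can be made concrete with a toy computation. This is only an illustrative sketch: the candidate outcomes, disagreement point, and function names below are my own inventions, and the two rules are the standard textbook formulations (minimize the maximum relative concession; maximize the product of gains over disagreement), not anything drawn from Gauthier's text.

```python
# Toy comparison of a maximin-relative-concession (MRC) split with the Nash
# bargaining solution. All numbers and names are invented for illustration.

DISAGREEMENT = (0.0, 0.0)  # what each party gets if no agreement is reached
OUTCOMES = [(6.0, 2.1), (4.0, 3.0), (1.0, 4.0)]  # feasible cooperative splits

def relative_concession(u, u_max, d):
    """Share of a player's possible gain (over disagreement) that is given up."""
    return (u_max - u) / (u_max - d)

def mrc_solution(outcomes, d):
    """Choose the outcome that minimizes the maximum relative concession."""
    maxes = [max(o[i] for o in outcomes) for i in (0, 1)]
    def worst(o):
        return max(relative_concession(o[i], maxes[i], d[i]) for i in (0, 1))
    return min(outcomes, key=worst)

def nash_solution(outcomes, d):
    """Choose the outcome that maximizes the product of gains over disagreement."""
    return max(outcomes, key=lambda o: (o[0] - d[0]) * (o[1] - d[1]))

print(mrc_solution(OUTCOMES, DISAGREEMENT))   # (4.0, 3.0): the balanced split
print(nash_solution(OUTCOMES, DISAGREEMENT))  # (6.0, 2.1): the higher-product split
```

On this contrived set the two rules pick different outcomes; that such divergences exist is why the choice between the MRC split and the Nash solution matters for the theory.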

Why is Gauthier attracted to the Proviso? In Morals by Agreement Gauthier argues that the Proviso establishes what each of us brings to the market; the result of the market will therefore not reflect prior predatory or otherwise violent actions among us, or deceit. It will merely reflect the value of the personal talents that we have brought, and therefore the market’s outcome is morally innocent; indeed it may be thought to pass the ‘contractarian test’ (as Gauthier calls it in ‘Twenty-Five On’). And indeed any form of redistribution, e.g. from rich to poor, may appear to be a violation of the Proviso. Gauthier still seems to think so, witness his example of Robin, the poor of Sherwood Forest, and the terribly exploited Sheriff of Nottingham (which I will discuss later). But is this what follows from the Proviso?

As I argued elsewhere, the Proviso establishes property rights to our own talents and capacities, but it has not much to say about the distribution of production factors other than talents; and what little it has to say, I now think, may rather favor redistribution than not. In order to judge whether our cooperative arrangements pass the contractarian test we must at some point envisage ourselves as disentangled from those arrangements. Our lives will then not appear as grim as Hobbes depicts them (solitary, poor, nasty, brutish, and short), because the Proviso precludes that we resort to our best non-cooperative (predatory and defensive) strategies. But how will our lives look? We would end up how? As the king of a large and fruitful territory in America? All of us? Unlikely. But even if so, John Locke presumes that we would then race back to our present positions as day laborers (all of us?), whereas Thomas Paine thinks that our lives would then be a continual holiday, much to be preferred to the fate of the poor in the industrial age. Or should we think of ourselves as independent hunter-gatherers, or farmers, or whatever, prior to cooperation? I think the Proviso, whatever it provides, provides no clue to this question. Given the relevant counterfactuals the Proviso suffers a serious epistemic disability.

Should it then be dropped as a guide to our judgments? Not quite. Because this is the little we can say: in the absence of cooperation but constrained by the Proviso, no person, and no coalition of persons, or ‘nation’, can be expected to end up as the exclusive possessor of an oil field, a diamond mine, as large landowners, or as Sheriffs of ‘estates’ – if only because the exclusive possession of such resources cannot be profitable unless their usufruct can be sold or hired out to those who do not possess them: the value of such extensive possessions in resources presupposes a market; it presupposes demand by the dispossessed. We, however, live inescapably under cooperative arrangements allowing some people to be such big possessors of resources while others have to participate as the mere contributors of their labor power. Do such arrangements, then, pass the contractarian test, and would any form of market regulation or of redistribution, e.g. from the propertied to the dispossessed, violate the Lockean Proviso? That is not at all obvious to me (and even Nozick admits that Fourier may have a point).

Nor do I understand why Gauthier would insist that Robin’s redistributive activities would violate the Proviso in the particular case in which the Sheriff of Nottingham had first excluded the inhabitants of Sherwood Forest from all the fertile grounds around, keeping all proceeds of the harvests to himself. Having the Sheriff out of their way would have allowed the inhabitants of Sherwood easy access to a harvest of their own, and its proceeds, and good Robin (I assume he models the ‘Robber-State’) merely repairs their disadvantage to a certain degree: in no way would the Sheriff be instrumental to the gains Robin allows them.

Now, I do not think that in the face of these sorts of considerations, the Proviso would warrant that we swing all the way back to the idea that the external means of production should be possessed collectively, and I have argued elsewhere that a so-called Unconditional Basic Income for all, being the equivalent of an equal split of the value of all external resources, would indeed itself constitute a violation of the proviso, transferring benefits from people who work hard to those who prefer to surf off Malibu. But in-between such radical proposals and radical anarcho-capitalism, we should identify the non-parasitic social arrangements that are nonetheless sensitive to the problem of private property rights in external productive resources. If we succeed we may hope that such arrangements prove to be rationally acceptable to Pareto-optimizers. As I said, that job has not been done yet, and it will not be easy.

I have two, related, questions about the entire Morals by Agreement project 25 years on.

1. I would like to know from David Gauthier what he takes to be the role of the social contract for his contractarian project. Is it “merely” a hermeneutical device intended to determine the content of the moral rules (e.g., social rules are ‘tested’ against the outcome of rational bargaining and, if they could be the result of such bargaining, they are ‘approved’)? Or is the social contract supposed to do more: does it also ground the ‘normativity’ of these moral rules? That is, is the fact that rational co-operators agree on a set of rules what makes the rules obligatory or authoritative for us? If the former, it seems that the contractarian project, just like the Rawlsian contractualist project, could do without the social contract metaphor, as it is just a device. If the latter, the contractarian will have to do some work to dispel the worry of some kind of questionable voluntarism: for what is it about rational co-operators and their decisions that obliges us to do as they agreed? You could make a Euthyphro of this: if it is already rational/reasonable to do what rational co-operators agree upon, then the fact of their agreement (actual or counterfactual) does not make this course of action normative. It already was. On the other hand, if the course of action that rational co-operators agree upon is irrational/unreasonable, the fact that they agree does not make it rational/reasonable.

The only way to avoid the dilemma, it seems to me, is to argue that it is the agreement of the rational co-operators that makes a set of rules normative, but that there are limits as to what rational co-operators ought to agree to. This, then, raises questions about the normativity of those limits.

2. The second question I have for Gauthier 25 years on is more of a request for information, but it is related. Does the social contract result in a determinate, definite set of moral norms (or norms of justice), or is there still room for local variation? If the latter, that would help explain why one would think that the social contract is necessary in the theory (i.e., not just as a hermeneutical device for testing or discovering the norms of justice, but as a way of grounding the norms of justice).

A comment on the two questions Bruno raises.

I have long thought that the social contract or hypothetical agreement was heuristic, a discovery procedure for determining what set of principles or norms governing the distribution of assets satisfied certain conditions (e.g., mutual advantage). I have never seen how it could be more than heuristic. David Gauthier disagrees with me here (as does Rawls in TJ):

"I have no objection to the claim that only real contracts bind, as a
general thesis about contractual justification. But the force of the social
contract is not found simply in its being an agreement. Rather its force
lies in its being the nearest approximation to an agreement in a context
in which literal agreement is not possible but would be desirable. We
cannot literally choose the terms of our interaction, but we can determine what terms we would rationally choose, from an ex ante standpoint
that does not privilege the actual course that our interaction has taken. In
this way we bring society as close as is possible to the status of a voluntary
association. The objection that the test involves only hypothetical agreement has matters the wrong way round. Actual agreement would not
show that the terms agreed to were rational, since it privileges existing
circumstances. The contractarian test, in taking the ex ante perspective,
removes that privilege."

Hypothetical choice (or agreement) is not choice (or agreement). I think that the argument that the content of hypothetical choice is to be identified with the content of morality (better, of the virtue of justice) is a separate argument. In the book it's not as clear as it might be what that argument is. I think it's a complicated inference-to-the-best-explanation argument, but I am not sure.

On the question of the normativity of the principles of justice, there is an ambiguity here. One question is what makes these binding norms of morality (or justice). Another is how is it that we have reasons for action (of a certain kind) to comply with these norms. The latter question, to my mind, is answered by seeing how the reasons we initially have give rise to new reasons. (We might have prior reasons to brush our teeth and to be resolute in some of our choices [cf. McClennen]. These reasons are prior to the social contract.) The former question, about the thesis that these principles are principles of morality or justice, is answered by the inference-to-the-best-explanation argument.

Lastly, Gil Harman and Jim Buchanan in the late 80's expressed scepticism that this kind of constructivist project could come up with determinate principles or norms. I agree. Just reflect on the question of who is a party to the social contract, and why there must be one social contract rather than a series of different, overlapping ones.

A comment on a remark in Susan Dimock's excellent note. She says:

"Perhaps the greatest virtue of Gauthier’s rational choice contractarianism is revealed in this simple thought: if everyone acted morally (consistently with the demands of social morality and justice), everyone would be better off. If morality really is authoritative, if morality really does provide persons with reasons (justificatory as well as explanatory) to act in conformity with its injunctions, then it must be because everyone would be better off following them."

A lot of moral thinkers won't agree, of course, as they don't think that morality or justice has this relation to well-being. I just want to note that even though the book uses a particular subjectivist account of our interests or, rather, our reasons for action, contractarians need not be wedded to that. It's not just that we need not assume, as David does in 1986, that agents are self-interested or "non-tuistic" for the purposes of theory construction. We don't even need to assume that they take their subjective preferences as their guide. We could use the utility functions of the theory of rational choice as place-holders for the best account of value and reasons for action, and this might be Aristotelian or realist. The only crucial assumption is that "morality" or, better, justice is to be constructed from elements that are prior to it. The latter may include facts about our wellbeing and preferences, but they need not look like the preferences of MbA.

Chris,

I take your point that the follower of Gauthier need not be wedded to a subjectivist account of well-being. But if one leaves behind the idea that one has most reason to pursue one's own good or one's own informed desires, how do we re-establish the purported advantage Dimock claims for Gauthier over Kantian and consequentialist rivals--namely that Gauthier does not ask us to sacrifice for the sake of others? Perhaps this advantage would then have to change to the claim that no one has to do what they have less reason to do on Gauthier's view? But won't many other views, once they help themselves to their own favored account of reasons for action, have this feature? Or perhaps, on your view, this claimed advantage of Gauthier cannot be sustained?

David Sobel:

A few things:

- part of the interest of morals by agreement is its attempt to construct an account of morality (or justice) from elements that are supposed to be non-moral (or non-just) and, unlike Rawlsian rivals, to do so in ways that show that everyone or virtually everyone has reasons to be moral, and reasons of the right sort. Its success in achieving these goals is not threatened by allowing the reader to substitute a better account of reasons for action, unless of course this better account begs the question or is in some way inappropriate for the project.

- The advantage morals by agreement is supposed to have over its Kantian rivals is twofold: a more plausible account of practical rationality, and a non-question-begging one. So Kantian rivals can say that on their view "no one has to do what they have less reason to do". But the contractarian thinks their view is both implausible and possibly question-begging in the ways that defeat the project of generating reasons to be moral. How utilitarian views are defeated depends on their foundations. Not surprisingly, Gauthier was, like Rawls, preoccupied with challenging Harsanyi's theorem. (I find his criticisms powerful.)

- a small point: morals by agreement can only hope to show that each of us is better off ex ante; ex post we may die in the trenches. Morality is a genuine constraint for Gauthier, so we may lose in the end.

- What is the project an attempt to do? To determine what the content of (part of) morality is, and to show that we have reasons (of the right sort) to act morally (or justly). Whether it succeeds or rival accounts succeed depends in part on the correctness of the account of practical rationality. One thought is that the contractarian's rivals go wrong in their understanding of rationality. Gauthier now has some doubts about the received decision-theoretic account of rationality, so I don't want to focus our attention on certain details of the book. Instead I think it important to understand the constructivist project and to evaluate it using the most plausible account of rationality. If we think that the most plausible account yields the result that we are acquainted with general norms of justice (e.g., a Thomistic natural law account), then the constructivist project is not needed.

Long-winded but I hope to the point and not too opaque.

C

I just re-read my questions on Gauthier's 25-on and I see that I forgot to say the most important thing. I did not express my admiration for both the book and the great essay that summarizes the development of his thinking since then.
Gauthier's project in Morals by Agreement was a real eye-opener to me in that it combined rational choice theory -- imho the best expression of the idea of practical rationality -- with questions about justice and morality that always seem to be at loggerheads with rationality. Gauthier did not completely convince me -- Gijs gives some good reasons why, Gauthier himself in later work as well -- but that does not take away the admiration I have for the project.

Chris,

In response to your response to my question about the status of the social contract for Gauthier, the passage that you quote is exactly the one that got me thinking. It seems that for Gauthier the (counterfactual) fact that rational cooperators agree is what makes these rules have authority (i.e., gives reasons to those subjected to them). And that seems problematic for reasons I pointed out above.

However, I am a bit puzzled as to why you (used to?) think that the social contract is nothing but a device to determine the *content* of justice. In that case, how could Gauthier's project also show that "...we have reasons (of the right sort) to act morally (or justly)..." It could only do that if the (counterfactual) fact of agreement provided reasons in and of itself.

What am I missing?

Bruno,

Let there be a number of alternative sets of norms governing what is owed to whom. Some may be the norms of our society, others of our sub-group, and yet others are the outcome of constrained hypothetical choice among all the members, say, of our society or all agents in the world. Gauthier will identify morality (or justice) with the last set. David Hume will identify justice with the first two sets (and Gil Harman will do the same for morality). What is the case for Gauthier's alternative? I'm not certain; on some days of the week I am on Hume and Harman's side of this debate. (I should have said that Jim Buchanan is also on this side.)

David thinks that choice has something to do with the privileged status of the last set of norms, perhaps for different reasons than Rawls. Part of his view may turn on some of what he says in the chapter of Morals by Agreement entitled "The Archimedean Point". He there contrasts his view with those of Rawls and Harsanyi. What he needs, however, is not quite what you say in your last para. above. He needs to show that the third set of norms is privileged and can be identified with morality or justice. Once he has done this, assuming something like the revisionist account of rationality he has developed, it will follow that we have reasons to be moral or just. Or so I think.

Imagine a contractarian argument for a reason to obey the laws of a particular legal system. It's hard, given what we now know (commentary on Hobbes; the work of Hart, Raz, Simmons, L. Green), to believe that such an argument could succeed, but that's another matter. Take Raz on legal authority. He has an account of rationality which makes sense of rational constraint. That's what's needed to show that particular laws have authority. Similarly with Gauthier, the crucial element in the argument for the authority of the norms of morality or justice is the account of rationality.

The last para. may not be intelligible.

C

David (Gauthier):

Here you replace "the constrained maximizers of MbA" with "rational cooperators" who "take their reasons for acting from considerations of fair Pareto-optimality, rather than maximization." Do you see any reason to adjust your appeal to constrained maximization in toxin cases? If not, are cases of rational cooperation and toxin cases less similar than they seem in "Assure and Threaten"?

In "Twenty-Five On," David Gauthier rejects his concept of constrained maximization as an account of practical reasoning in favor of a new revisionist account of practical reasoning that, in contrast to orthodox rationality, rejects the ideal of expected individual utility maximization and assumes that the Pareto-optimality condition takes priority over the equilibrium condition. Gauthier writes that such...

"[a] Pareto-optimizing account of rational choice ascribes to each person the capacity to coordinate her actions with those of her fellows, and to do so voluntarily, without coercion. It treats the exercise of this capacity as rational, when the person sees the outcome of coordination as reasonably efficient, so that no significant possible benefit is left unrealized, and reasonably fair, in that no one can reasonably complain that her concerns were not taken sufficiently into account, in determining the outcome to be achieved by coordination." (608)

According to Gauthier's new account of practical reasoning, members of society, apart from being rational, must be fair-minded. They must have an interest in reaching reasonably efficient and fair distributional outcomes, and they must consider their personal good in the selection of such outcomes in the same way as they consider the good of others (609). De facto, members of society are assumed to consider each other as moral equals (a core assumption of Rawlsian contractualism), and not merely as being (roughly) equal by nature (a core assumption of Hobbesian contractarianism).
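The reversal Gauthier proposes, giving the Pareto-optimality condition priority over the equilibrium condition, is easiest to see in the one-shot Prisoner's Dilemma. A minimal sketch (the payoff numbers and function names are mine, using the usual textbook values, not anything taken from the essay):

```python
# One-shot Prisoner's Dilemma: (D, D) is the unique equilibrium, yet it is
# Pareto-dominated by (C, C), which is what a Pareto-optimizer coordinates on.
PAYOFFS = {  # (row move, column move) -> (row's utility, column's utility)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def is_equilibrium(profile):
    """No player gains by unilaterally switching her move."""
    r, c = profile
    best_r = max(PAYOFFS[(m, c)][0] for m in "CD")
    best_c = max(PAYOFFS[(r, m)][1] for m in "CD")
    return PAYOFFS[profile] == (best_r, best_c)

def pareto_dominated(profile):
    """Some other profile leaves no one worse off and someone better off."""
    u = PAYOFFS[profile]
    return any(v[0] >= u[0] and v[1] >= u[1] and v != u
               for p, v in PAYOFFS.items() if p != profile)

equilibria = [p for p in PAYOFFS if is_equilibrium(p)]
print(equilibria)                     # [('D', 'D')]
print(pareto_dominated(("D", "D")))   # True: (C, C) dominates it
print(pareto_dominated(("C", "C")))   # False
```

Orthodox rationality stops at the equilibrium test; on the revised account the agents coordinate on the efficient and fair outcome instead.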

Based upon his new revisionist account of practical reasoning and a newly introduced contractarian test, Gauthier justifies his original bargaining principle under the new name, principle of maximin proportionate gain. Gauthier's contractarian test represents a necessary condition for the justification of social institutions. According to this test, social institutions are justified only if they "...could be reasonably agreed to by individuals choosing their social structure from an ex ante perspective." (619)

Gauthier's contractarian test seems to be inspired by Rawls' concept of 'justice as fairness' that captures the idea that, in order for terms of social cooperation to be just, agents must agree on such terms before they enter society, although Gauthier (unlike Rawls) does not employ a veil of ignorance in order to justify his principle of maximin proportionate gain as a principle of distributive justice.

From a methodological point of view, it seems that over the years Gauthier has shifted from defending a version of Hobbesian contractarianism in Morals by Agreement to defending a more Rawlsian-inspired contractualist position. I would be interested in Gauthier's self-understanding of the development of his theory expressed in these terms, if the crude labels of contractarianism and contractualism are considered to be useful. Is it fair to say that Gauthier's new moral theory is closer to Rawls' contractualism than it is to Hobbes' contractarianism and the project of orthodox rational choice contractarianism to derive moral conclusions on the grounds of non-moral premises alone?

Hi Chrisoula,

Why would you think that the move from Constrained Maximizers to "rational cooperators" would have an impact on Gauthier's response to the toxin puzzle?

I'm still trying to pinpoint my concern. I guess I think something important to the spirit of "Assure and Threaten" is lost if the very strong similarity in Gauthier's responses to toxin cases and cases of rational cooperation is weakened. The similarity somehow made morality seem like just another case in which rational constraint is called for, for the same reasons it can be called for in intrapersonal cases. Now I'm wondering whether, as Michael suggests, "Gauthier's new moral theory is closer to Rawls' contractualism than it is to Hobbes' contractarianism and the project of orthodox rational choice contractarianism to derive moral conclusions on the grounds of non-moral premises alone."

Sorry if my introduction of the toxin case is still cryptic, particularly since I'll be out of touch for the next week and a bit :(
But I'll check back in later this month.
C

The Reversion Problem (Redux)

Duncan MacIntosh

David (Gauthier), it is a true delight to be able to engage with your most recent work; and to have occasion to reconsider some of the issues we’ve been discussing recently.

I want here to moot with you an issue I’ve pursued you about over the years, the one we’ve discussed recently in correspondence. I’d hoped to be able to use this occasion to concede some things to you, in keeping with the direction of my recent e-mails to you. And I will concede some things. But on the main thing, I’ve been duck-rabbiting, seeing it one way, then changing my mind, then changing my mind back again. So I’ll just say what I’m thinking here and let you respond if you’re so inclined.

Before your revisionist theories of practical rationality, it was generally thought by decision theorists that actions are rational if maximizing, intentions, if intentions to do maximizing actions. Since it is not maximizing to act on assurances of co-operation in one-shot Prisoner’s Dilemmas (PDs), nor on threats of retaliation in Deterrence Paradoxes (DPs), rational agents cannot acquire the benefits mutual co-operation would yield in PDs, and cannot deter in DPs.

In MBA and before, I took you to be saying that actions are rational if rationally intended, and intentions are rational if it maximizes going forward to adopt them. I took you to be thinking it would be maximizing to have intentions to co-operate in one-shot, multi-stage PDs (PDs in which, first, you choose a basis for further choice, then other agents ascertain what your basis is, then they choose their actions, then you choose your actions). This would supposedly make it rational to adopt such intentions (because that would supposedly induce others to co-operate), and to act upon them (since it was maximizing to adopt them). Likewise for intentions to retaliate in DPs.

I objected that if what makes an intention rational is that it maximizes to have it going forward, then after others assess one’s choice basis, it will be maximizing and so rational for one to replace one’s co-operative intention with an intention to defect, with the result that Gauthier-rational agents would defect in PDs. Similarly for DPs – the intention to retaliate would be maximizing to have at the threatening stage, but not when deciding whether to act on the threat. So after the threat has failed and an attack has occurred, Gauthier agents would form and act on the intention to refrain from retaliating. Of course everyone would know all of this, so it would be rationally impossible and/or pointless to go through the epicycle of temporarily adopting intentions to co-operate or retaliate. In my contribution to the recent symposium in _Ethics_ on your work, I called this The Reversion Problem.

What’s causing the problem is that intentions are continually changing in whether having them is maximizing as the games unfold; and in the end-stage of the games, the stage where one’s intentions can no longer affect the behaviour of other agents, the intentions maximizing to have would be intentions to do maximizing actions.
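The unraveling just described lends itself to a small mock-up. Everything below is a hypothetical reconstruction of the criticized reading, with invented names and the standard PD payoffs: an intention is kept only so long as acting on it maximizes going forward, so at the final stage it is discarded, and a transparent partner who predicts final-stage conduct rather than announced intentions never cooperates at all.

```python
# Sketch of the reversion dynamic in a one-shot, multi-stage PD. Hypothetical
# reconstruction: the agent's intention influences others only at earlier
# stages; at the final stage she re-deliberates by forward maximization.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_reply(other_move):
    """The forward-maximizing action once the other's move is fixed."""
    return max("CD", key=lambda m: PAYOFFS[(m, other_move)])

def final_stage_action(adopted_intention, other_move):
    """On the criticized reading the agent re-deliberates here, so the
    earlier intention drops out of the calculation entirely."""
    return best_reply(other_move)

def transparent_other(agents_intention):
    """The other agent predicts final-stage conduct, not announced intentions:
    she cooperates only if cooperation would actually be returned."""
    return "C" if final_stage_action(agents_intention, "C") == "C" else "D"

other = transparent_other("C")          # "D": she foresees the reversion
agent = final_stage_action("C", other)  # "D": the intention is abandoned
print(agent, other)                     # mutual defection despite the "intention"
```

The adopted intention does no work at the only stage where conduct is actually chosen, so the epicycle of adopting it is pointless, which is the point of the objection.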

I proposed to repair your view by suggesting that rational agents should in the first instance subject not their intentions or their actions to deliberation, but their preferences: it maximizes for one to adopt, going forward, preferences maximization on which would in turn require one to adopt and act upon intentions to co-operate in PDs and to retaliate in DPs with agents expected to be suitably influenced by one’s forming desires that would induce one to do these things. Rational agents are always to do actions maximizing on the desires rational for them to have, and to intend to do just such actions; and one’s desires continue to be rational only if changing them wouldn’t maximize on them.

More on this in a moment, but now, back to you. I gather that either you never thought that intentions are rational only if having them is maximizing going forward, or you have come to think something different makes intentions rational. In the period between MBA and your article, “Assure and Threaten”, and possibly before, perhaps you had the view that intentions are rational just if forming them complies with the rule for choosing intentions and actions one would have found it maximizing to adopt at the beginning of one’s life to govern one’s choices throughout one’s life. And that, of course, would have been whatever rule would require one to adopt and act on the intentions associated with assuring and threatening.

Let me pause here and say that, if I misunderstood your earlier view, I’m happy now to stand corrected on what it was, and to admit that the view you really had is harder to criticize. While if you used to have that view, but haven’t had it for a while, I’m happy now to explore the more recent view, and see how well it does.

OK, so I’ve been tempted now to agree that there could be a version of your view that would not have a reversion problem. We would have to be careful, of course. For if the view is that one should adopt the principle that one should adopt and act upon intentions it maximizes going forward to adopt – going forward from each moment of possible intention adoption -- then we will have a reversion problem, since intentions will still be chosen by their (ever-changing) effects; and we’d just have another useless epicycle to a useless epicycle. But if the view is that one should, for example, adopt a principle saying to adopt and act upon intentions that consist in issuing assurances and threats, then one might think that there will be no reversion problem. For that principle will always be the one it would have maximized to adopt at the start of one’s life; and it will always require one to follow through with assurances and threats (unless, of course, one discovers that one was wrong about the conditions under which one issued them).

Right, so I was going to concede that you are correct that the idea that one should make choices from the rule it would have advantaged you to adopt at the start of your life to make choices from throughout your life, would not have a reversion problem.

But on reflection, I’m not so sure. Remember that the initial puzzle was to explain how it can be rational to make advantageous commitments to the doing of disadvantageous actions. On the standard view it cannot be rational. On what I took to be your earlier view that, really, rationality requires one to form and fulfill intentions maximizing going forward to form, it turns out it cannot be rational either, because of the reversion problem. Most recently we have from you the idea that, really, rationality requires one to choose from the life-rule it would have been maximizing to adopt at the beginning of one’s life. Is it true that this will save us from reversion? I wonder. For here I am in the middle of my life. I recognize that if I were now to commit to following the life-rule, my life would go better. So I commit to following the life-rule. We imagine that this makes others trust me, and that they then co-operate with me. Now I’m trying to decide whether to return their co-operation. The life-rule says I should do this. But of course, if I were now to renounce the life-rule, I’d do better still. And given that the only conceivable argument for my adopting the life-rule was that doing so would be maximizing for me, and since now I’d do even better by adopting instead a rule saying to break the life-rule, surely I should adopt that rule, and then do what it says, namely, break the life-rule and defect. So the problem of how it can be rational to form and fulfill advantageous first-order commitments to disadvantageous first-order actions has now become the problem of how it can be rational to form and fulfill advantageous second-order commitments to keeping to life-rules (governing which first-order commitments to form and fulfill) it is advantageous to commit to keeping to, but not advantageous to actually keep to. We now have a reversion problem, except at the meta-level.
The problem is not with the rule itself – it really does always say to form and fulfill assurances and threats (unlike the rule which says to form and fulfill intentions it maximizes going forward to form). The problem is with the rationale for adopting and acting on the life-rule: the rationale is the advantage of adopting the rule. But what that sort of rationale recommends changes as one moves through the stages of PDs and DPs.

Notice that since the problem is not with the rule, but with the rationale for adopting and keeping to it, the problem will re-arise for your more recent views that it is rational to adopt and act on intentions one expects acting on which to leave one better off than had one never adopted them (since the intentions with this property change after others have assessed one’s choice basis), and that it is rational to adopt and act on intentions to do one’s part in bringing about mutually fulfilling outcomes with those inclined to do likewise (since, again, whether it is advantageous to have this as one's choice basis changes after others have assessed one’s choice basis).

You might be inclined to reply that, in “Twenty-Five On”, you are not saying that people should adopt any of these principles for the advantage of having them, and are instead saying that it is just a primitive fact about rationality that people should always adopt and act on intentions to help bring about mutually fulfilling outcomes. But if this is what you are saying, that would mean that Susan Dimock’s wonderfully elegant and concise reconstruction of your views misrepresents your most recent view. It would also mean that you are offering your principle of rational choice with absolutely no argument.

So I’m tempted to say your solution won’t work, and to re-offer my preference-revision solution, a solution which can take people as they are and offer a motivation for them choosing differently going forward. However implausible the preference-revision alternative seems (and I admit it faces hurdles of its own), at least it won’t obviously face a reversion problem.

What do you think?

In Morals by Agreement, Gauthier spends quite some time answering Hobbes's Foole, and he claims that a successful answer to the Foole is needed for his theory to succeed. It seems to me that answering the Foole has become less of a priority for Gauthier given that the new constrained maximization, agreed Pareto-optimization, is no longer justified as being expected-utility maximizing. Still, the Foole's question can also be asked on the new approach, and I wonder how it would be answered. Gauthier writes that "rational cooperators need not seek a collective or substantively common good. Each is concerned to realize his own good, as expressed by his utility function." It seems to me that a rational cooperator may yet come to reflect on her 'disposition', and wonder whether she is realizing her own good as effectively as she could. Say now that such a rational cooperator, finding herself to be surrounded by other rational cooperators with less than excellent mindreading skills, would after an episode of reflection come to think she will do better for herself by becoming disposed to covertly violate certain agreements. What type of mistake would this person be making if she now came to believe that she has reason to indeed violate now and then? Would we have to say that she is mistaken to think that she will be better off by doing so, as Gauthier did in MbA, or would we now say that, even if she were correct to think she can realize her own good better by violating certain agreements now and then, she would be irrational to do so?

Why Be Rational?

Duncan MacIntosh

Hello David, here’s me duck-rabbiting again.

I think I can guess what your reply to my “Reversion Problem (Redux)” post would be (and it’s kind of along the lines of some of the things Peter Timmerman just said, as are my concerns about the reply):

You will probably say that while your project in MBA was to persuade a straightforward maximizer to become a constrained maximizer (which you sought to do by pointing out that it maximizes for her to become someone who, going forward, constrains herself from maximizing), your new project is merely to identify what rationality is. And what it is, is following whatever principle of choice is such that following it for one’s life makes one’s life go best by the measure of one’s preference function. You conjecture that this principle will say, roughly, to do one’s part in bringing about mutually fulfilling outcomes with others so inclined. But you are offering this as an analysis of what being rational is, not as suasion to a straightforward maximizer to change her ways. As Peter says, the new project is not to give an answer to the Foole. Although I suspect you can give the Foole an answer if you’re debating with the Foole in hypothetical space at the beginning of your lives about what life-rule to adopt – at that logical moment, you do have a reply to the Foole, namely, that his proposed life-rule will have him do less well than yours.

Meanwhile, you do in fact give arguments for the correctness of your analysis, e.g., the argument that being rational should be expected, ab initio, to make one’s life go best, and that the rule advising seeking fair, Pareto optimal outcomes while otherwise not violating the proviso would best be expected to have this effect. Read in this way, Susan’s reconstruction is accurate to you, and I was wrong to say you offer your principle with no argument.

This is very lovely.

It does, however, change the philosophical landscape somewhat. After all, the big puzzle before you was, how can it be rational to be moral? The new puzzle after you is, why be rational? (As asked by the Foole in the middle of her life wondering what to do going forward.)

Until my next flip-flop. (This is too easy for you, David: all you have to do is enjoy your coffee while I spin in circles ventriloquizing conjectural versions of you!)

Duncan.

Having It All

Duncan MacIntosh

David, it occurs to me that the preference-revision approach I propose would let you have it all. We can say to the Foole that his preferences would be better advanced, beginning right now, by him changing his preferences (for minimum jail time in the PD, for minimum harms in the DP) into preferences the having of which would make it maximizing for him to co-operate and retaliate respectively (e.g., into preferences to keep commitments it was advantageous for him to make, and to fulfill threats it maximized his expected utility for him to sincerely issue). We get to say that being rational should be expected to make one’s life go better by the measure of one’s preferences. We can give an answer to the question, why be rational? Namely, that being rational will best advance one’s preferences. (There’s still the question, of course, why seek preference-advancement? – a very large issue in its own right, but one we both face.) And we get to support your conjectures about the content of rationality. For the preferences we would advise the Foole to adopt would be preferences action on which would likely be in compliance with the principles of choice you recommend. (I’m setting aside the debate about whether rationality would advise us to adopt and act on intentions it maximizes to adopt, or to adopt and act on intentions we expect acting on which would leave us better off than had we never adopted them, or to adopt intentions to bring about mutually fulfilling outcomes, etc. That’s another debate. I’ve picked the first option here just to have something to work with while making other points.)

I find it fascinating that the preference-revision approach would net out to recommending the same things as the life-rule approach. They seem to be functionally equivalent. And yet they sound completely different: one has agents always making preference-advancing choices (although, later, from changed preferences), the other, “counter-preferential” choices, at least reading “preferential” choices as ones designed to advance only the preferences of the choosing agent, not the preferences of both the choosing agent and of those agents with whom she interacts (something which happens only as “collateral damage” on the preference-revision approach). I’m guessing the reason these approaches wind up lining up is that, on the preference-revision approach, one is in effect advised to come to prefer to do what you advise, and what you advise in effect makes each agent a co-operator with others in bringing about mutually fulfilling outcomes (leaving aside, for now, the matter of threats).

Duncan.

Yet Another Reply To The Foole

Duncan MacIntosh

David, I suppose yet another reply you might now have to the Foole is this: just as your contractarian test will evaluate laws, institutions, ways of distributing goods, and so on, by asking whether we would have chosen these things going in – this is your theory of what it is for these sorts of arrangements to be justified – so might you say that the way we evaluate whether we have reason to, say, keep a commitment, is by asking whether keeping it is concordant with the life-rule we would have adopted going in – this is your theory of what it is to have a reason to do an action. So when the Foole says she has reason to make and then break a promise because breaking it would be to her advantage, you would say to her that she is wrong in her theory of what counts as a good reason. And your proof that she is wrong is that she is in effect proposing to follow the life-rule, make promises it is to your advantage to make, break promises when it is to your advantage to break them, but there is a life-rule her following of which would have had her life go even better in her own terms, namely, make promises to do one’s part in bringing about mutually fulfilling outcomes with those similarly inclined, keep promises just when you expect keeping them will bring about such outcomes, in part because you still expect that others are similarly inclined. (No need of preference-revision.) Her proposed rule will fail her because others won’t co-operate with her – she won’t be able to get her promises taken seriously. And the rational person finds her reasons now in the life-rules she would have adopted in the hypothetical life-planning stage.

Duncan.

P.S. I once wrote a paper entitled “Two Gauthiers?”. Maybe I need to write another one called, say “Six Gauthiers?” (just to leave room for unanticipated variances).

P.P.S. A plague upon the inventor of live blogs – they encourage insufficiently considered speech.

D.M.

Prof. Gauthier, I write as someone who deeply sympathizes with the contractarian project in MbA, specifically that of reconstructing morality on a non-moral but rational justificatory basis. I agree too that standard rational choice theory cannot get the job done and ought to be revised, though I disagree with the specific normative conclusions derived from your favored justificatory basis in MbA (nontuistic constrained maximization).

Since others above have covered many of the general questions I am curious about, I will raise some issues that puzzle me regarding the rejection of the Nash equilibrium in "25 On".

(1) The following seems to provide one explanation why Pareto-optimizing cooperators do not opt for the equilibrium outcome in a PD. Namely, "cooperators are payoff oriented rather than strategy oriented. They ask which of the feasible joint payoffs it would be reasonable for them to accept.... They do not ask which of his feasible strategies each should accept, given what he expects the others to do...." (p.609)

In other words (this might be an uncharitable reading, but I offer it to seek your clarification), the moves made by Pareto-optimizing cooperators are determined by the outcome they find reasonable to accept, instead of it being the other way around, i.e., instead of the outcome being determined by the moves that they find reasonable to make. But this seems either to misrepresent the PD as a cooperative game rather than the non-cooperative game that it really is (perhaps assimilating the compliance problem represented by the PD to the bargaining problem that precedes it), or to give short shrift to the strategic nature of the compliance problem by stipulating that agents fail to adjust their moves at some point in response to the anticipated moves of others.

To elaborate on the last point, since the resulting joint payoffs depend on what moves agents make in strategic games, the responses of rational agents would be mutually adjustive, and it seems to me that focusing on equilibrium sets of strategies takes this into account in an appropriate way. An equilibrium set of strategies is a natural place for such mutual adjustments to come to rest, since no one has a reason to adjust their strategy given the choices of others. In the 2-person PD, it seems that each player has a (maximizing) reason to deviate from the Pareto optimal outcome where both cooperate, so that outcome is unstable in terms of the mutually adjustive moves that the players will make.
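The mutual-adjustment point can be checked mechanically. Here is a minimal sketch (the numeric utilities are just ordinal stand-ins for the usual PD rankings, with 3 for a player's best outcome down to 0 for her worst; the function names are my own, not anything from Gauthier's text):

```python
from itertools import product

# Ordinal utilities (higher = better) matching the standard PD ranking:
# unilateral defection > mutual cooperation > mutual defection > unilateral cooperation.
ACTS = ("C", "D")

def row_u(r, c):
    return {("D", "C"): 3, ("C", "C"): 2, ("D", "D"): 1, ("C", "D"): 0}[(r, c)]

def col_u(r, c):
    return row_u(c, r)  # the PD is symmetric

def is_equilibrium(r, c):
    # An outcome is an equilibrium if neither player gains by unilaterally switching.
    no_row_dev = all(row_u(r, c) >= row_u(r2, c) for r2 in ACTS)
    no_col_dev = all(col_u(r, c) >= col_u(r, c2) for c2 in ACTS)
    return no_row_dev and no_col_dev

equilibria = [(r, c) for r, c in product(ACTS, ACTS) if is_equilibrium(r, c)]
print(equilibria)                  # [('D', 'D')]: only mutual defection survives adjustment
print(is_equilibrium("C", "C"))    # False: each has a maximizing reason to deviate
```

The brute-force check confirms the instability claimed above: mutual cooperation is not a resting point of mutually adjustive moves, and mutual defection is the only one.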

(2) You will of course point out that the equilibrium outcome in the PD is suboptimal, so the players do worse by their own lights. So this seems to provide another explanation why Pareto-optimizing cooperators do not opt for the equilibrium outcome. But I wonder what you think about a slightly different compliance problem where the equilibrium outcome is not suboptimal. Here may be a possible game to represent it:

                    Column defects                                 Column cooperates
Row defects:        (row chooser's 3rd, column chooser's 4th)      (row chooser's 1st, column chooser's 3rd)
Row cooperates:     (row chooser's 4th, column chooser's 1st)      (row chooser's 2nd, column chooser's 2nd)

It's the same as the PD, except that the column chooser prefers cooperation over defection when the row chooser defects. The top left outcome where both defect is no longer an equilibrium, and the top right one where the row-chooser defects and the column-chooser cooperates is now an equilibrium outcome that is also Pareto-optimal. The bottom right outcome where both cooperate according to the terms of their agreement is Pareto optimal, but the row chooser can profit by defecting.

(I assume that some concrete situation can be imagined that makes my interpretation of the above game plausible. E.g., column-chooser may be one of many in a cooperative scheme who desperately need their daily allotted share of the monetary surplus to buy their three meals per day, and the row-chooser may be one who is financially secure. The players prefer leisure to work, and work to going hungry. The row-chooser's work contributes more to the surplus than the column-chooser, and he is considering whether to skip a day's work. If the row-chooser doesn't skip, the column chooser can get her three meals even if she skips work. But if the row-chooser skips work, the column chooser will have to work in order to get her three meals per day.)
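The equilibrium and optimality claims about this modified game can be verified with a small brute-force check (a sketch; the 3-to-0 utilities simply encode each player's 1st-through-4th rankings from the matrix above, nothing more):

```python
from itertools import product

ACTS = ("C", "D")
# Outcomes are (row move, column move). Same as the PD for the row chooser,
# but the column chooser prefers cooperating when the row chooser defects.
ROW_U = {("D", "C"): 3, ("C", "C"): 2, ("D", "D"): 1, ("C", "D"): 0}
COL_U = {("C", "D"): 3, ("C", "C"): 2, ("D", "C"): 1, ("D", "D"): 0}

def is_equilibrium(r, c):
    # Neither player can gain by unilaterally switching strategies.
    return (all(ROW_U[(r, c)] >= ROW_U[(r2, c)] for r2 in ACTS) and
            all(COL_U[(r, c)] >= COL_U[(r, c2)] for c2 in ACTS))

def is_pareto_optimal(o):
    # No other outcome is at least as good for both and strictly better for one.
    return not any(ROW_U[p] >= ROW_U[o] and COL_U[p] >= COL_U[o] and
                   (ROW_U[p] > ROW_U[o] or COL_U[p] > COL_U[o])
                   for p in product(ACTS, ACTS))

eqs = [o for o in product(ACTS, ACTS) if is_equilibrium(*o)]
print(eqs)                           # [('D', 'C')]: row defects, column cooperates
print(is_pareto_optimal(("D", "C"))) # True: the equilibrium is not suboptimal
print(is_equilibrium("D", "D"))      # False: mutual defection is no longer an equilibrium
```

This bears out the claims in the comment: the unique equilibrium (row defects, column cooperates) is Pareto-optimal, so the suboptimality of equilibrium cannot be what condemns it here.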

Here, if you are inclined to say that the row chooser rationally ought not to "defect", then the explanation or justification for this cannot be that the equilibrium outcome is suboptimal.

So, I wonder if there is some other explanation or justification available for rejecting the equilibrium outcome.

I just read Michael Moehler's comment and it makes me wonder what, in the end, is the real difference between the contractarian approach and the contractualist.
Both have a revisionary account of rationality and rational bargaining (Rawls just as much as the old Gauthier in MbA).
Both seem less concerned with showing that people have reason to obey the requirements of morality/justice than with determining the content of morality/justice (Rawls, but also the Gauthier of "Friends, Reasons and Morals").
Both employ a counterfactual agreement (i.e., not a real agreement or contract) to do this.
Both present themselves as constructivist theories of morality, but it is clear that the constructivism is only 'local' as David Enoch puts it. That is, it only makes sense against the background of a subjectivist or realist theory of reasons (see the exchange between David Sobel and Chris Morris above).

So are there any truly essential differences left?

For Duncan MacIntosh,
I know that you have been suggesting the preference revision theory as a neat way out of the questionable aspects of Gauthier's revision of rational choice theory. If I understood you correctly (big 'if'), what you are proposing is a version of the following idea: Suppose that I have preferences that make me form intentions and perform actions that are such that in my current environment I have less preference satisfaction than I would have if I had other preferences. I then revise my preferences such that I form intentions and perform actions that are such that in my current environment I have more preference satisfaction both in terms of my new preferences and my old preferences. And, so you submit, this is rational.
Did I understand that correctly?
I am assuming that there need not be an intrinsic preference for preference satisfaction that I always have, i.e., that this is a preference I will have before and after preference revision. More precisely: preference satisfaction should be understood as just that -- the world is such that my preference is realized (i.e., preference satisfaction need not be a fuzzy feeling in my head, but simply is a state of affairs, namely the state that the world is as I preferred it to be).
Here is the problem. Suppose I really like Champagne. Unfortunately, I live in an environment where people around me copy my pursuits and tastes. I am a fashion setter. My love for Champagne inspires all others to crave the stuff as well. As a result champagne is really expensive and I can have only one magnum per week (...).
If I were to revise my preference for Champagne and instead came to prefer beer, I would be able to get a crate of beer each week AND, because of my environment of dedicated followers of alcoholic fashion, Champagne would be much cheaper, so I could afford two magnums per week.
Is this a reason to teach yourself to prefer beer to Champagne?
No, it is not. You prefer Champagne, no matter what other people prefer. You think that stuff is really good, excellent taste, elegant, what have you. All the things that that despicable brew called beer does not have. So ex ante there is no reason to change. Ex post, there is no reason to be happy either. True, you would be able to afford more Champagne, but you would not crave it anymore. You would want to get beer. The fact that you could get more Champagne is totally irrelevant for you. So how could there be a reason to revise your preferences? There can only be such a reason if there is another, constant and unrevised preference that stays the same and is better served by the revision. For example, your preference for alcoholic intake, or the preference for satisfaction (the fuzzy-feeling-in-the-head kind). But you may or may not have such a preference, and I thought that the argument was supposed to be independent of the existence of such a preference.
Now, if the argument doesn't hold for champagne and beer, why would it hold for maximizing and constrained cooperation?

What am I missing?

Just a reminder on procedure for this discussion: David Gauthier will respond to questions submitted by July 20 in a few days. There will then be a second round of questions. Thanks!

Bruno,

I think I follow both Duncan’s proposal and your question for him. I think that an advocate of a preference-satisfaction account of well-being has an answer. In your scenario, it’s true that you don’t have any reason ex ante *based on your preference for champagne* to change to a preference for beer. But you do have a reason based on the promotion of your well-being, as measured by preference satisfaction (whatever those preferences are), to change to a preference for beer. This may lead to satisfied fool vs. dissatisfied Socrates questions, but those are other questions for a preference satisfaction account.

I thought that you were going to bring up a reversion problem. Or maybe it’s not a reversion problem in Duncan’s sense so much as an instability problem: If you prefer champagne to beer, then (trendsetter that you are) champagne is scarce so you should prefer beer to champagne so you can have all the beer you want. This is effective only for a while, then beer becomes scarce, so you should switch back to a preference for champagne, which is now abundant. I don’t think this is a problem, actually, since it just shows that what it’s rational to prefer depends on market conditions, as it were. That you drive the market is just an artifact of the case.

This sort of instability could very well be a problem for the view, however, if there were a case in which immediately upon changing your preference you would be rational to change it back. But I don’t think there could be such a case, for in such a case the hypothesis is not satisfied that you would better satisfy your new preferences if you changed from your old preferences to new preferences.

If the problem you’re getting at is that Duncan has said that you should change your preference when you would get “more preference satisfaction both in terms of [your] new preferences and [your] old preferences” (your comment) so that being able to get more champagne after you no longer prefer it somehow seems relevant, then I agree that it is a problem for Duncan’s view. I don’t think it’s insurmountable, but I think it’s a problem that he hasn’t sufficiently addressed.

Duncan, what have Bruno and I missed?

Don,

Good to hear from you. I was not careful enough; the trendsetting thing is distracting. I just brought it in to stipulate that the frustration of your initial preferences may be due to your environment.

You say in the first paragraph of your proposed answer on behalf of Duncan: "But you do have a reason based on the promotion of your well-being, as measured by preference satisfaction (whatever those preferences are), to change to a preference for beer." First, I thought we were talking about preferences and assuming that only these things give reasons. Secondly, if you have a preference for the promotion of your well-being, whatever that consists in, you may very well be right. But I think it a very peculiar preference (you prefer that your preferences be realized...?) and, more importantly, is that a preference you *must* have? I doubt it, and then the Champagne-Beer problem comes up again.

But Duncan needs to enlighten us... :)!

And now I will be quiet and wait to see what David Gauthier will say!

Bruno,
Real quick: You're right that I have to be thinking that we can have reasons other than ones that issue directly from preferences. I wouldn't want to posit the sort of preference you mention to promote one's well-being. I'll be quiet too, though.

For Bruno and Donald

Duncan MacIntosh

Since your questions are more for me than for David, I’ll answer them now while David is formulating his replies to the questions for him. (I cleared this with Hille, who says she only interjected her note about procedure so people wouldn’t be expecting replies from David until Wednesday.)

I assume for the sake of argument that in having preferences, one by that very fact has reason to do what advances them, i.e., what maximizes given them. In PDs and DPs, other agents will do things advancing of your preferences (co-operate with you, lower the odds of their attacking), only if the other agents can predict you’d behave in certain ways towards them (co-operate with them, retaliate against them if they attack). The agents won’t predict this if you retain your current preferences (for minimization of your jail time, for minimization of total harms to all people) because it would not advance your preferences for you to do these things. Under these conditions, since you have a rational duty to advance your preferences, and since this requires you to get others to expect certain behaviours from you, and since others will do this only if you acquire preferences that would be advanced by these behaviours (I’m assuming the situations are ones in which you can’t bluff), it is rational for you to undergo a change in your preferences. With the new preferences, when it is time for you to co-operate or retaliate, you will do so, because these things will be maximizing on the new preferences and so will be rationally obligatory.

Note that in the foregoing I do not assume that agents have background, meta-, or higher-order preferences for the satisfaction of their preferences; nor do I assume that agents have background, meta-, etc., preferences to have satisfiable preferences. Some agents may be like this, some not; and it’s not obvious how to show that it would be rationally obligatory to be like this. So instead, I take agents as I find them. For now, let’s assume we’re talking about agents who don’t have the foregoing kinds of meta- etc. preference. Next, I am not suggesting that it is rational for an agent of the sort I’m talking about to change to new preferences only if she expects the new preferences to be able to be satisfied. There is an interesting discussion to be had about whether an agent can rationally prefer what she knows she can’t have. But I have nothing to say on this, except that, sometimes, there will only be reason to change one’s preferences if one can expect that the new preference is one that one has some hope of satisfying. This would hold where you are changing your preferences in order to get yourself to be motivated to do an action, where other agents’ expecting you to be able to do it is a condition of their doing something advancing of your initial preferences. Next, I am not suggesting that part of what rationally entices the agent to change her preferences is the anticipation of the satisfaction of her new preferences (although see the foregoing caveat). So I am not suggesting that the agent, with her current desires, has reason now to want any future desires she may adopt to be satisfied (except for the above caveat sort of case). For since she does not now have the desires, she would not now anticipate getting any utility of the sort she cares about from satisfaction of these future desires. The only utility -- preference-satisfaction -- she finds worth caring about now is that which comes from the satisfaction of her current preferences.
Instead, she is moved to change her preferences solely because she hopes this will bring about what she now prefers (i.e., prefers before making the change). Nor am I making the Stoic suggestion that the agent should adapt her preferences so that she prefers only what she has a chance of attaining. I’m not saying “come to like what you can get” (for that would destroy your motivation to try to get what you now want and so be anti-maximizing); I’m saying “change what you like when changing it is a means to bringing about what you originally liked”.

For more on this, see my:

https://philpapers.org/rec/MACPAT-6

But Bruno introduces an important sort of case. Sometimes what I prefer is [X]. Other times, what I prefer is [the utility of attaining X], defined as [X obtaining, me knowing X obtains, and me still preferring that X obtain at the time of X’s obtaining]. An example of the first is wanting world peace – I don’t necessarily need to be around to see that, and maybe me sacrificing my own life would bring about world peace, and I’d have reason to sacrifice myself for that even though I won’t be around to see it, and so won’t then desire it, and so won’t then get the utility of it. An example of the second is wanting to be able to enjoy drinking nice wine. That desire can be satisfied only by me drinking nice wine, knowing I’m drinking it, and still desiring to be drinking it while I’m drinking it. For the latter is, let us suppose, a necessary condition of me enjoying the wine. The first sort of desire can in principle be advanced by its own revision. Perhaps a philosopher’s demon will agree to bring about world peace just in case I cease preferring it -- he wants to corrupt my soul. A good deal for me, since I care about world peace, not about my soul. The second sort of desire in principle cannot be advanced by its own revision. The philosopher’s demon offers to give me nice wine provided only that I stop desiring to be drinking nice wine. A bad deal for me: changing my desire gets me the wine, but makes my enjoyment of drinking it impossible, and so does not bring about a state of affairs that satisfies my original desire.

For more on this, see my:

https://philpapers.org/rec/MACPAT-2

I concede that my alternative to Gauthier’s proposals won’t work for cases where agents have desires of the second sort, only for the first. But I think lots of our desires are of the first sort.

It is easy enough to distinguish one sort from another when we are talking simply about desires – the agent desires X in one sort of case, X plus the enjoying of X in the other sort. But it is harder to distinguish the sorts when we start talking of preference-rankings rather than simple desires. Here I think it would take careful analysis to do the sorting, and I haven’t previously thought this through.

I’ve been assuming that agents’ preferences in PDs and DPs are of the advanceable-by-revision sort – the PD agent is said to prefer more freedom from jail to less, not the enjoying of more freedom from jail; the DP agent is said to prefer fewer harms to all to more, not the enjoying of the attaining of fewer harms. So these agents’ retaining their desires is not a condition of their satisfaction – of what they are desires for coming to pass.

But let’s make it hard for me and assume enjoyment is desired too. We shall assume that in the PD, the agent begins preferring THE ENJOYMENT OF unilateral defection (which gets her 0 years in jail) to mutual co-operation (2 years), the enjoyment of mutual co-operation to mutual defection (3 years), and the enjoyment of mutual defection to unilateral co-operation (4 years). Suppose she changes her preference ranking into a ranking in which she most prefers to enjoy co-operating with those who will co-operate in turn just with those prepared to co-operate (2 years), second-most prefers to enjoy participating in unilateral defection (0 years) to mutual defection, third-most prefers to participate in the enjoyment of mutual defection (3 years) to unilateral co-operation, and fourth-most prefers to participate in the enjoyment of unilateral co-operation (4 years). Then the other agent will co-operate, yielding our agent the enjoyment of mutual co-operation (2 years) rather than that of mutual defection (3 years). She originally preferred enjoying 2 years to 3 years to 4 years, and she’s gotten the enjoyment of 2 years over 3 years, so she’s done well by the measure of her original preference. Her continuing to most prefer 0 years is not needed for her to have done well by the measure of her original preferences, so she was not self-defeating in changing her preferences. I think similar things can be said of the agent in the DP, however I’ll leave the demonstration of this as an exercise for the reader. (I’ve always wanted to say that!) 
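The arithmetic of the revised ranking can be made explicit in a toy model. This is only a sketch under an assumption of my own: the partner is transparent and cooperates exactly when she can predict that our agent's ranking makes co-operating-with-co-operators maximizing. The jail terms are the ones used throughout the discussion.

```python
# Jail terms in years for (my_move, partner_move); fewer years is better.
YEARS = {("C", "C"): 2, ("D", "C"): 0, ("D", "D"): 3, ("C", "D"): 4}

def play(i_am_conditional_cooperator: bool) -> int:
    """Return my jail term, given my disposition and a transparent partner."""
    # Assumption (not from the text): the partner reads my disposition
    # and cooperates just in case I have the revised ranking.
    partner = "C" if i_am_conditional_cooperator else "D"
    if i_am_conditional_cooperator:
        # Revised ranking: co-operating with a co-operator is ranked first,
        # so cooperating is maximizing for me when she cooperates.
        me = "C" if partner == "C" else "D"
    else:
        # Original ranking: defection dominates, so I always defect.
        me = "D"
    return YEARS[(me, partner)]

print(play(False))  # 3 - original ranking: mutual defection
print(play(True))   # 2 - revised ranking: mutual cooperation
```

Measured by the original preferences, where 2 years beats 3, the agent who revises does better, so (as the comment argues) the revision is not self-defeating even though she no longer most prefers the 0-year outcome.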
Another time, I want to think more about the identity conditions on desires, preferences, and utilities -- the conditions under which preferences are preserved, parts of preference-functions are preserved, the utilities had from satisfying something ranked in one preference function are identical to the utilities had from satisfying something ranked in a different position, or relative to different things, in another preference function, and so on.

But something I never thought of before is that some desires of the enjoyment sort might be able to be advanced by agents being able to make choices in the way Gauthier imagines where they cannot be made in the way I imagine; for his way, if it works, preserves the agents’ preferences; and this might be a very important virtue of Gauthier’s theory over mine. I want to think about this more. (Note that we face related issues: his agents must refrain from doing things that would get them more of what they prefer, so they won't get it; my agents can't find it rational to advance certain desires by changing them, and so won't get some of the things they desire. The agents are out of luck on the same things in both theories, but for different reasons.)

A final point. Bruno, with many others, imagines that we come to desire things by seeing what’s desirable about them; and that we couldn’t rationally change our desires unless we discovered we were wrong about the things we desire being desirable. In Bruno’s example, Champagne is seen as more desirable than beer, and the mere fact that desiring beer instead would get you more Champagne is not a good reason to switch to desiring beer. (Actually, this example illustrates two things: the enjoyment issue, already discussed, and the issue I’m now addressing. Let’s just focus on this second aspect here.) I don’t know if there is such a thing as inherent desirability. But even if there is, I think in some circumstances it can be rationally obligatory to cease desiring the desirable; and that this would not just be a case of rationally motivated irrationality, but of rationally motivated extensions of rationality. I take this up in my recent _Ethics_ article for the Gauthier Symposium, so I’ll be brief about it here.

The view Bruno is suggesting is in effect this: if you learn some X is desirable, rationally you should always desire X, always intend to bring X about, and always act to bring X about. But now suppose the only way to bring about X is to conditionally intend to do something preventing X in some unlikely conditions (the DP is an example); suppose that to intend to do something against X you’d have to stop desiring X. And suppose there is an action, A, the doing of which will result in you ceasing to desire X, and so in you forming the conditional intention to act against X. What does Bruno’s theory of rationality say you should do vis-à-vis A? Well, his theory issues a contradiction: you should do A because this would advance X, but you should not do A because doing A will result in you ceasing to desire X and in you intending to do something against X. That is, doing A is rational and not rational. No theory of rationality entailing contradictory descriptions of the rationality of an action can be the truth. So this theory is false. The truth, I argue in my article, is that the rational rightness of desire, intention and action cannot all be grounded in the presumed permanent desirability of something. Instead, rational rightness of desire, etc., must be thought to track the desires one would have to adopt in order to advance something initially posited as the right desire (or failing that, something initially posited as your starting desire). So if, to advance the desire for X, you’d have to do an action whose effect would be to make you cease desiring X in favour of desiring something else, Y, and make you intend to act against X, then the new thing, Y, is what, relative to you, rationally right desire would desire, rationally right action would aim to bring about, and rationally right intention would intend to bring about. If this is right, then it CAN be a reason to stop desiring something that this would bring about what it is a desire for.

OK, now back to teaching myself the nuances of David’s theory, something which, by more of the sort of practice I've been engaging in here, I hope asymptotically to approach in the limit of inquiry represented by the end of this Pea Soup event!

Duncan.

Gijs – on the proviso

The proviso, as I see it, is an essential element of cooperation. It is intended to rule out behaviour that deliberately worsens the position of those with whom one is interacting, in order to benefit oneself. It does not stop a man in the lifeboat from keeping all the water to himself, since others suffering from thirst is collateral damage (which doesn’t make it right, but we need more than the proviso to condemn it); the benefit would be the same for him if he were alone in the lifeboat. But the lifeboat is a situation in which the presence of others, and the need to interact with them, is unwelcome to each.

I am concerned to defend the proviso in the context of mutually beneficial cooperation. We show our willingness to cooperate by (among other things) not taking advantage of potential fellow cooperators. If you can worsen my situation by calling on your gang of thugs to trash my business, then offering me protection from physical damage in return for a substantial premium may be mutually beneficial at the time of the offer, but the benefit depends on a prior violation of, or a present willingness to violate, the proviso. It affords no basis for cooperation.

Any form of redistribution may involve a proviso violation, and so is prima facie wrong. Whether the violation is actually justified depends on (among other considerations) whether it rectifies an earlier wrong. Thus Robin Hood carries off the harvest from the land wrongfully appropriated by the Sheriff of Nottingham, and bestows it on the rightful landowners, the peasants who have fled to Sherwood Forest to escape the Sheriff’s attempt to make them serfs on what is rightfully their freehold. Robin violates the proviso in order to rectify the wrong done by the Sheriff, and this is fully justifiable. Suppose however that the Sheriff and townspeople of Nottingham are hardworking craftsmen, dependent for their food on the produce from the fields bordering on the city, and Robin seizes this produce and gives it to the petty thieves and highway robbers who infest the forest. Here Robin’s violation of the proviso is not merely prima facie wrong; it is wrong simpliciter.

(Cont) On cooperation, and its superiority to maximization

Paraphrasing Hobbes, we might say that the first requirement of rationality is “Seek cooperation, and follow it.” And where cooperation is not to be found, then we may, by all means we can, defend ourselves. A person who seeks to benefit at another’s expense, making himself better off by making the other worse off, is not following the first requirement of rationality. He is not defending himself, but attacking the other. In doing this, he shows himself unfit to be a party to cooperative arrangements. That he may nevertheless be successful in taking advantage, and that he may be maximizing his expected utility, is not to the point.

The point is this: for any non-zero-sum interaction, each cooperator may expect to do at least as well as, and in some interactions better than, if he and his fellows were maximizers. This establishes the superiority of cooperation to maximization in interactions. That a person might be able to subvert an ostensibly cooperative interaction to increase his utility expectation is irrelevant. A cooperator denies that maximizing considerations constitute (good) reasons for acting in interactions. There is no underlying maximizing consideration that the cooperator would consider a good reason for acting.

This does not mean that cooperators have no use for maximization. Where cooperation is to be had, they do indeed have no use for it. It doesn’t play any part in their deliberations. But where it is not to be had, then persons who behaved cooperatively would be exploited by their non-cooperative fellows. Cooperation and Pareto-optimality are primary, but maximization and equilibrium provide a default position which any person can adopt, if cooperation is not to be had. Note that cooperation cannot be adopted unilaterally. Cooperation prescribes an action for each person, but the action is to be performed only if the person reasonably believes that (most of) the others will perform their prescribed action. Maximization can be pursued unilaterally; the prescriptions issued by maximization refer only to the person’s expectations of what his fellows will do, whatever that may be. Cooperation makes no prescription to persons who expect (enough of) their fellows not to cooperate; maximization prescribes for any expectation of what others will do.

Bruno, also Chris: the role of the social contract

As I see it, the contract is what gives normative force to rules. We ask of any rule, practice, expectation, etc. – Is this a consideration that rational persons could have accepted in an initial situation (or as following from an initial situation) in which they and their fellows were choosing the normative elements of their society? If it could, then it passes the weak contractarian test. If no alternative could have been chosen, or if it would be part of any set of considerations that could have been chosen, then it would pass a much stronger test; it would specify a requirement and not an option. The insistence on rationality limits what could be agreed to, and also what must be agreed to. Rational societies differ in the optional items, but must agree on the necessary ones. (Thus any society, to be rationally acceptable, must provide for the making and keeping of commitments. Arguably, any society, to be rationally acceptable, must provide some constraints on dress, but different societies will have different constraints (women may / may not expose their breasts publicly).)

Chris, David Sobel: morality and the social contract

The contractarian thinks that there is nothing for morality to be other than the fruit of the social contract. There are no suitable alternative normative frameworks other than that provided by cooperation. Maximization is an alternative framework, but it contrasts with morality on virtually any understanding of the two. Kantianism appeals to an alternative account of rationality, but one that I would argue has little plausibility. (See “The Unity of Reason: A Subversive Reinterpretation of Kant” Ethics 1985, reprinted in my collection of essays “Moral Dealing”.) Utilitarianism is just confused. I do not deny that there are values beyond those considerations that are accommodated in our utility-functions, such as aesthetic values, but these do not ground obligations. But I would be ill-advised to embark here on the distinction between aesthetic necessity and practical necessity.

Chris, Bruno: the hypothetical contract

If the social contract gives normative force to rules etc., then (since it is a hypothetical contract), hypothetical contracts can play a justificatory role. Morris quotes from p 619 of 25-On; what I say about the contractarian test seems to me a convincing answer to objections that, not being real, hypothetical contracts cannot justify. I think that I show how what the objectors say can’t be done, actually is done. They seem unconvinced, but I do not know what more to say.

Chris, David Sobel: subjectivity and relativity

I can do without some of the things that are meant by subjectivity in value theory. But I cannot do without agent relativity. The utilitarian supposes that all states of affairs are to be valued in terms of the amount of something-or-other (happiness for J.S. Mill) that they afford. Persons would then have identical utility functions and the problems of interaction would vanish.

Chris: the contractarian test

I don’t see the problem in subjecting legal systems to the contractarian test. They may, of course, fail it. But I don’t suppose that in the real world, any legal system would or could get a perfect score on the test. And I am quite sure that some would do better than others (those that tend to promote a democratic society of autonomous individuals who share liberal values would get the top marks). When I say that the expected outcome of a fully rational interaction would be Pareto-optimal, I do not condemn all other outcomes. If the interaction prescribed for me would, given that most others perform their prescribed actions, yield an outcome that falls slightly short of Pareto-optimality, and that delivers significant benefits to each (normal) person, then this is likely to be as good as it gets (see Joseph Heath, The Efficient Society). To refuse to play one’s part because the outcome would not be fully optimal and would be somewhat more beneficial to, say, farmers than to intellectuals would be irrational, from the cooperative standpoint.

Chrisoula – the toxin puzzle

The cooperative approach claims that where an individual has entered into a beneficial agreement, he has reason to perform his part of the agreement, provided he may expect the others to do likewise. If the others perform first, and do their part, then he must do his part without regard to its effects on his utility (in the absence of circumstances widely different from those anticipated in making the agreement). We suppose that if the agreement gained the free, unforced consent of those party to it, its outcome is as close as can be expected to Pareto-optimality.
What creates discomfort with this way of analyzing the toxin puzzle is that the situation does not involve cooperation. The experimenter does not care what the potential toxin drinker does; she cares only about what he intends, and while this determines what she does, she presumably is indifferent – she’s simply carrying out an experiment to determine how people behave in this type of situation. The potential drinker presumably wants to get as much money as he can. He therefore wants to convince the experimenter that he intends to drink the toxin (but not drink it). But if he has reasoned through the situation, and realizes that he will have no reason to drink the toxin, he will find himself unable to form the necessary intention. If this argument is vulnerable, it is surely at the premiss that he will have no reason to drink the toxin.

In the past, I have tried to show that in forming the intention to drink, I give myself a reason to drink. I no longer find my argument convincing. I do not know what to say about the toxin puzzle.

Michael Moulder and Peter Timmerman: Hobbes, Rawls, and me

I see my theory as becoming more Hobbesian and less Rawlsian. (And I see Hobbes becoming more Gauthierist and Rawls becoming less Gauthierist. I do not suppose that Rawls had my theory in mind in the way that I have at certain times had his theory in mind.)

The Rawls that awakened me from my ordinary language slumbers is the Rawls of Robert Paul Wolff. Unfortunately, I do not have Wolff to hand, and no time to get hold of a copy. This is the Rawls who thought that the theory of justice could be part of the theory of rational choice, a position that I was happy to embrace, but which Rawls came to think a (major) error.

I took the stance of a revisionist first in bargaining theory, and then in the theory of rational choice. Only later did I come to realize that bargaining theory was erected on the basis of a mistaken theory of rationality. And I would no longer describe myself as any sort of maximizer – the maximizing theory of rationality, as a general theory, needs to be rejected, not modified.

Hobbes says that we are to seek peace and follow it, and only if peace is not to be had are we to defend ourselves by any means we can. He also says that a man who thinks it may be reasonable to break his covenants with those who benefit him is not to be admitted to society. Hobbes is on the verge of grasping the fundamental distinction between deliberating in the way that best realizes one’s concerns and deliberating in terms of realizing one’s concerns. He is, that is, on the verge of embracing my contractarian theory. Rawls is playing in another ballpark.

To derive moral conclusions from non-moral premises, one must show what these premises are. Once I hoped that maximizing premises would do the job. Now I think that Pareto-optimizing premises are required.

Reasons for action and theories of rationality

If one thinks that an interaction is fully rational only if it is Pareto-optimal, then one will not think it rational to violate one’s agreements because doing so would maximize one’s utility. If one thinks the standard of rationality in interaction accommodates the concerns of all the parties to the interaction, one will not think one has reason to act in ways that accommodate only one’s own concerns. Cooperators and maximizers differ about what considerations constitute reasons for acting. That the members of a community of cooperators will expect to do better than the members of a community of maximizers is relevant to determining which is the correct theory – or, if we suppose that there are other alternatives, which is the better theory. Whether the fact that a certain action would maximize my utility is always a good reason for acting depends on whether it is so recognized by the best theory. If cooperation is the correct theory, then the person who violates his agreement to increase his utility does not have a good reason for his action.

Duncan – the reversion problem

This would have to be my core move against Duncan MacIntosh. Just as Ariel Rubinstein led me to see that I could not begin with non-cooperative games if I wanted a cooperative conclusion (no doubt he would be surprised to learn that his talk had had that effect, and it is very likely that he knows nothing of my work, unless Ken Binmore has mentioned my name in the context of philosophers who think they understand microeconomic theory but are egregiously confused), so, to come to the main clause of this sentence, Duncan has convinced me that I cannot begin with persons who are utility-maximizers of any stripe, if I want to defend Pareto-optimality as the principal condition in a theory of deliberative rationality. Duncan presents me with the reversion problem, and the reversion problem, I believe, can be resolved only by treating persons as cooperators by choice, and maximizers only by necessity.

Duncan is clearly right to argue that if rationality depends exclusively on forward-looking considerations, then the twists and turns of constrained maximization will be futile; the reversion problem will reemerge. So rationality must be based, at least in part, on something else. And the something else, I have repeatedly insisted in these comments, is Pareto-optimality. Rational persons will not leave unrealized benefits that some could attain at no cost to the others. They may be tempted to depart from cooperative arrangements, but temptation only pretends to be rational.

I am sympathetic to “Yet Another Reply to the Foole”. This seems close to the position I now embrace. We escape the reversion problem by showing that the considerations that would give rise to it are not reasons for acting according to our best account of rationality. That I could maximize my utility by breaking my agreement would not qualify as a reason for acting in the context of cooperation, and cooperation is the core of our best account of rationality. It is rational for persons to be cooperators and rational cooperators do not accept utility-maximization as a reason for violating agreements.

Bruno and Donald: preference revision

I have grave misgivings about putting preferences into play. A person’s preferences lead him to a sub-optimal outcome; he revises his preferences so that the outcome is optimal given either his new preferences or his old ones. But if he really has revised his preferences, then why would he care whether the outcome would be optimal for them? And how does he come to adopt his new preferences? – on my view, preferences are exogenously given as inputs to deliberation. I’m more comfortable with a conception of rationality that doesn’t invite monkeying around with the deliberative inputs.

Boram Lee: The Prisoner’s Dilemma and a variant

Maximizers argue from strategy to outcome, cooperators from outcome to strategies. Yes. And what persons care about are outcomes.

The PD, as I understand it, is a 2-person 2-strategy interaction with payoffs such that each person has a strongly dominant strategy, and the outcome if each selects his dominant strategy is Pareto-inferior to the outcome if each selects his dominated strategy. It is non-cooperative in the sense that each person chooses what to do independently. Played by maximizers, it results in a non-optimal equilibrium. Played by cooperators, it results in a non-equilibrium optimum. Maximizers adjust their strategies until no one can benefit from changing his strategy given the strategies of the others; cooperators adjust their strategies until no one can benefit by changing his strategy given the payoffs to the others.
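The structural features just listed (a strongly dominant strategy for each player, a dominant-strategy equilibrium that is Pareto-inferior, and an optimum that is not an equilibrium) can be verified against a conventional PD payoff matrix. The particular numbers below are illustrative assumptions, not taken from the discussion:

```python
# Standard PD in utility terms (higher is better).
# Keys: (my move, other's move); values: (my payoff, other's payoff).
PAYOFF = {
    ("C", "C"): (2, 2),
    ("C", "D"): (0, 3),
    ("D", "C"): (3, 0),
    ("D", "D"): (1, 1),
}

# Defection strictly dominates: whatever the other does, I do better by D.
for other in ("C", "D"):
    assert PAYOFF[("D", other)][0] > PAYOFF[("C", other)][0]

# (D, D) is an equilibrium: neither gains by unilaterally changing strategy...
assert PAYOFF[("D", "D")][0] >= PAYOFF[("C", "D")][0]
assert PAYOFF[("D", "D")][1] >= PAYOFF[("D", "C")][1]

# ...but it is Pareto-inferior to (C, C): both do strictly better there.
assert all(PAYOFF[("C", "C")][i] > PAYOFF[("D", "D")][i] for i in (0, 1))

# And (C, C), though optimal, is not an equilibrium: each gains by defecting.
assert PAYOFF[("D", "C")][0] > PAYOFF[("C", "C")][0]

print("PD structure verified")
```

Maximizers land on the non-optimal equilibrium (D, D); cooperators land on the non-equilibrium optimum (C, C), exactly as the definition requires.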

Now your variant on the PD. If the persons have agreed on the mutual 2nd best, then on my view they act rationally in choosing the strategy that, given similar choice by the other, yields the agreed outcome. Agreement-keeping is necessary to cooperation. No problem so far. But what if they have not made an agreement? They still consider it rational to reach a Pareto-optimal outcome. They might ascertain if any outcome was both optimal and in equilibrium, and if there was a unique outcome satisfying both, aim for it. This would lead them to the upper right outcome in your game. Or if there were a range of optimal outcomes, they might be drawn to one that seemed to give the persons approximately the largest possible equal proportionate shares of the benefits from cooperation, and suppose it to be the lower right outcome in your game. That there seem to be alternative possibilities indicates the need for further study.

Bruno: Contractualists and Contractarians

Contractualists argue that we must justify our actions to others by showing that they would accept these actions as part of the social contract. Contractarians argue that we must justify constraints on our actions by showing that we would accept these constraints as part of the social contract.

On David’s Reply to Chrisoula about the Toxin Puzzle

Duncan MacIntosh

David, first let me say that I often find myself pausing over one of your sentences, marveling at its compactness, its insight, its suggestiveness. But I can admire you at my leisure. Now back to discussion.

I’m not sure why you scruple to solve the Toxin Puzzle. Perhaps I have been too influenced by the reading of you given by Preston Greene in his paper “Must a Rational Agent Choose the Best Option at the Time of Decision?” at the York conference on your work two years ago. Or perhaps his reading is the best reading. At any rate, I take it that what is now foundational to your theory of reasons is not that one should seek co-operation where possible. That is something derived from something more foundational, namely, that we have reason to do whatever the right life-rule would say to do. And the right life rule is the one our following for our life would give us the best life by the measure of our preferences (or by whatever measure is posited as the aim of rational choice, if that is something other than best advancing our preferences). That rule tells us to form and fulfill intentions to co-operate in PDs. But surely it would also tell us to form and fulfill the intention to drink the toxin? It matters not that the Toxin Puzzle is difficult to understand as a potentially co-operative interaction due to the eccentricity of the motives of the experimenter. What matters is that one has reason to form and fulfill whatever intentions the best life-rule would say to form and fulfill. And clearly in the Toxin Puzzle you do better by forming and fulfilling the intention to drink than by forming and fulfilling the intention not to drink, so surely the best life-rule would say to do this; so you have reason to form (and fulfill) the intention to drink.

I expect you could say something similar about being “In the Neighbourhood of the Newcomb Predictor” (the title of an article of yours): what would the best life-rule tell you to do if you are about to be assessed for your choice basis by a Newcomb Predictor? Surely the rule would say that you should form the intention to choose only what’s in the opaque box. Matters are less clear if you are told you’ve already been assessed by the Newcomb Predictor. Now, your intentions cannot influence him to put a million in the opaque box, and he has had no opportunity to see you having considered the matter and having formed an intention to choose only the opaque box. So maybe there the best life-rule would say to pick both boxes.

But suppose I’m wrong and the impulse to co-operation is foundational for you. Then we can still represent forming and fulfilling the intention to drink the toxin as rational, since we can represent the Toxin Puzzle offerer as someone who wants to give money only to those who can confidently be predicted to drink the toxin, as evidenced by their sincere intentions. Ditto for the Newcomb Predictor: we can represent him as someone who wants to give large amounts of money only to someone he can confidently predict, on the basis of their sincere intentions, will leave a little money on the table, the money in the transparent box; and you can afford him a basis for this prediction by forming the intention to take only what’s in the opaque box.

On Preference Revision:

Duncan MacIntosh

David, I recognize your concerns about preference revision. I don’t expect you to agree with the idea; and nor would I want you to switch programs – yours is far too interesting in its own right. But for what it’s worth, my reply to Bruno and Donald in the lull period between the first and second rounds of this blog event may go some way to answering your worries, while at the same time affording you additional reason to favor your own approach.

As for your specific questions: You ask, why would someone who has changed his preferences care whether this will result in an outcome optimal for them? The answer is that, before changing, he as it were wants the best outcome for his current preferences by virtue of having them; and after changing, he wants the best outcome for his new preferences by virtue of having the new preferences. From neither vantage does he care about the other preferences’ satisfaction except, prospectively, when he sees that changing his preferences will help bring about what he currently prefers only if his new preferences would be satisfied (the caveat case I mention in the earlier post); or retrospectively, so far as his new and old preferences will often overlap in what they are preferences for. (Again, see the earlier post). I do not see it as necessary to him being moved to act on his later preferences that this will somehow advance the earlier. And in fact, the entire point of adopting the new preferences is to become motivated to do actions that precisely would not advance the earlier. As measured by one’s old preferences, one benefits from changing to the new, not from acting on the new.

Next, you ask how the agent comes to adopt the new preferences. Well, if I’m right that undergoing preference revision is rational, then it comes about automatically and non-actionally, on analogy with the way beliefs automatically, non-actionally change in response to evidence, and on analogy with the way instrumental desires change when one discovers that what they are desires for will not really advance one’s intrinsic desires as well as some other thing available to be instrumentally desired.

On Why One Should Obey the Proviso:

Duncan MacIntosh

David, I’m unclear why one is to obey the proviso. From Susan and from some of the things you say, I have the impression the reason is to evidence to others that one is open to co-operation, thereby to solicit from them a preparedness to enter into co-operative arrangements.

Four points about this. First, this is a reputational effect; and even straightforward maximizers have reason to do things like this to cultivate a good reputation. So is there some special reason for doing it on your new theory of rationality?

Second, what is the connection between this and getting people to co-operate with me in one-shot games? After all, so far as my motivation is sourced in expected reputational effects, my street cred as someone who won’t violate the proviso in iterated games, or in games where I accrue a reputation for future games, should do nothing to increase people’s confidence that I will co-operate with them in one-shot scenarios. For by definition there are no reputational consequences for defection in one-shot scenarios. If I’m to face a one-shot scenario after having faced a series of reputational scenarios, then for all others know, I’ll defect in that scenario, for the usual reasons plaguing finitely iterated games.

Third, then, what is the relation between this and your long-standing requirement that people must be translucent if they are to have reason to co-operate with each other? That is, the requirement that they be able to tell what they have as bases for choice?

Finally, in MbA it appeared that the reason to obey something like the proviso was that, if one tried to bargain from a position that had already been improved by predation (in effect, proviso violation), one could not attract the person predated upon to a deal, or could not reasonably expect the deal to be stable – could not expect the person to abide by it. What if anything is the connection between the MbA motivation and the motivation on your new theory?

Continuing the line of thought Morris started, suggesting that Gauthier’s view can drop the assumption that a person’s reasons are determined only by what is good for her or what gets her what answers to her non-moral desires. Gauthier suggested he does need agent-relativity, which seems true to me. I am wondering whether that is all he needs. I don’t have an agenda here—just genuinely wondering what assumptions about reasons the view needs to make.

Suppose, as Morris suggests, everything is just done in terms of the best account of reasons. Since Gauthier’s view is that “there is nothing for morality to be other than the fruit of the social contract” I take it his view is incompatible with an account of reasons that maintains that I have direct, not mediated via a social contract, moral reasons to help others. I guess I am assuming that if the only thing for morality to be is just the fruit of a social contract, then the only thing for moral reasons to be is the fruit as well. If that is right, I am wondering if the view is only compatible with pictures of reasons that have it that X’s moral reasons are built out of, and broadly instrumental to, X’s non-moral reasons. Some deontological views might have it that one has basic agent-relative moral reasons, so the above would be a contentful assumption. Again, I am not saying this is an implausible assumption, but rather just wondering if Gauthier’s view must make it.

Let me comment on the two Davids' comments (even though I have not had time to digest all of these notes, or even count the number of comments Duncan has made!).

- David G is right that what he needs is agent-relativity. I am very pleased as I have argued this for some time. Morals by agreement needs two things here: an anti-realist assumption (the denial of what David S says above) and a certain amount of conflict due to our different reasons for action. If all (or even most) of our reasons for action are agent-relative, then there should be enough conflict to generate the problems that morals by agreement is to solve. (The traditional doctrine of the circumstances of justice is crucial here.)

I argued some time ago that eschewing the value subjectivism of MbA and merely adopting the thesis that our reasons are agent-relative would be a weaker premise for the moral theory. (I have a sketch of the argument for this view, as well as characterizations of the relevant concepts, in a ch. of my book on the state.)

- I also think that it's a mistake to think of morals by agreement as a theory of morality. I think some parts of morality are self-regarding, but even if we restrict our attention to the parts that are other-regarding, some of these may not be amenable to a contractarian approach. Sobel asks about reasons to help others. Some of these reasons are tied to duties owed to these others (directed duties), and these are naturally associated with the virtue of justice. I think this is the home of morals by agreement. But benevolence and charity also would have us help others (as do kindness, generosity, etc.). We may have some duties of charity, but they are not (I think) duties owed to others. And of course many acts of charity are over and above the call of duty. So I think that we can conjoin a non-realist constructivist account of justice with a realist non-contractarian account of other parts of morality. Gauthier will be sceptical or agnostic, but this possibility addresses Sobel's query above.

Professor Gauthier, thank you for taking the time to respond to my earlier questions. Here is a further question, if you don’t mind my asking it.

In regard to assessing different theories of rationality, you write in one of your comments: “Cooperators and maximizers differ about what considerations constitute reasons for acting. That the members of a community of cooperators will expect to do better than the members of a community of maximizers is relevant to determining which is the correct theory – or, if we suppose that there are other alternatives, which is the better theory.”

I wonder how this differs from the parametric test of the rationality of conditional cooperation that you gave in MbA (in allowing for other alternatives here, perhaps you are acknowledging the points made by Holly Smith and Peter Danielson in the Vallentyne anthology). But, more than that, I am curious what standard of assessment is being used to determine which is the correct or the better theory of rationality. Is it a maximizing standard or something else?

The Toxin Puzzle Chrisoula and Duncan

Assume the donor of the $1M cares about only one thing – his success at identifying intentions. There is a single optimal outcome – the person who must choose whether to drink the toxin forms the intention to drink it, the donor puts $1M in the chooser’s account, the chooser abandons his intention and does not drink the toxin. But how does he do it? How does he form the intention? How does he abandon it, without opening himself to the charge that he never really had the intention?

One can form an intention only if one thinks one will have reason, when the time comes, to perform the intended action. There are qualifiers but I shall ignore them in the interest of brevity. Drinking the toxin is sheer cost to the chooser; what reason can he have to do it? In “Assure and Threaten” I argue that it is rational to perform a costly action if it is part of a sequence of actions that results in a better outcome than one could have expected had the sequence not contained the costly action.

So a parallel argument applies here: one forms the intention to drink the toxin because the intention/action pair (form the intention/carry it out) leaves one better off than any alternative lacking the intention. One would do better with: (form the intention/don’t perform the action), but one cannot adopt that pair simultaneously.
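The structure of this argument can be made explicit with a small sketch (my own illustrative numbers, not Gauthier's): the reward attaches to forming the intention, the toxin is a later cost, and the only pairs the chooser can adopt are ones where the intention is genuine.

```python
# Hypothetical payoffs for Kavka's toxin puzzle (illustrative numbers only).
REWARD = 1_000_000   # paid if the intention to drink is sincerely formed
TOXIN_COST = 1_000   # stands in for a day of illness

def payoff(form_intention: bool, drink: bool) -> int:
    """Net payoff for an intention/action pair."""
    total = REWARD if form_intention else 0
    if drink:
        total -= TOXIN_COST
    return total

# The chooser can only adopt intention/action *pairs*: an "intention" one
# already plans to abandon is no intention at all, so the pair
# (form intention, don't drink) is not simultaneously adoptable.
available = {
    "form & drink": payoff(True, True),
    "no intention & don't drink": payoff(False, False),
}
unavailable = {"form & don't drink": payoff(True, False)}

best = max(available, key=available.get)
# Among the adoptable pairs, forming the intention and carrying it out wins,
# even though the unadoptable pair would have been better still.
```

The point of the sketch is just that the comparison is between whole sequences, not between the drinking action taken in isolation.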

I think this allays my worry about the toxin puzzle. Sorry to have kicked up an unnecessary dust cloud.

Life-rules and such Duncan

I’m uncomfortable about things so grandiose as life rules. I think in deliberating we begin by taking certain considerations as reasons and through reflection and dialogue come to refine our view, distinguishing explanation from justification in our deliberations and aiming at both. But here I want only to point in the right direction. I assume that our theory will treat explanatory and justificatory reasons, not as different kinds of considerations, but as different roles played by the same kind of entity.

In both roles reasons are considerations that enter into motivation. But a consideration may be a good reason for acting, and recognized by the agent as such, without even tending to motivate him. And a person may recognize that a consideration motivates her while denying it to be a good reason for acting. A person has various concerns that she will appeal to as justifying her actions when challenged. Necessarily, her wants, desires, goals, needs, her satisfaction, her fulfillment will figure among her reasons. If someone told us that the satisfaction she obtained from reading historical novels gave her no reason to read them, we would be puzzled. No reason at all? This person is not merely failing to acknowledge a reason that does not motivate her; she is denying that a consideration relevant to her concerns provides her with a reason. Her position seems untenable; if her acts were significantly disassociated from her acknowledged concerns, she would pose us a serious problem of interpretation.

But not all considerations that are taken to be reasons are so taken because they answer directly to the person’s concerns. Promises, and other forms of commitment, provide reasons though they do not speak directly to our concerns. But we need such devices, and we can show how they function to make our lives go more smoothly and happily. In our interactions, cooperation – coordinating our actions to achieve an outcome that is Pareto-optimal, or at least Pareto-superior to the present situation – affords reasons both to seek out possible cooperative arrangements and to conform to the expectations of cooperative arrangements already in place. Maximizing in an interaction, seeking one’s best reply to the expected actions of the other persons, provides reasons for actions when cooperation is not to be had, or would be pointless (as in zero-sum games), or unnecessary (when the Invisible Hand is at work). But since cooperators overall do better than maximizers, maximizing considerations should widely be subordinated to those of Pareto-superiority.

Morality, rationality, and the social contract Sobel, Morris

I am a normative minimalist. I’ll explain that presently. First, a brief history of animal action.

Most animals just do things. Some sensory inputs trigger actions as outputs. As brains got bigger, this simple picture was complicated by the animal’s ability to represent these inputs to herself, and then divorce them from their stimuli, so that the animal can represent non-actual states of affairs and these representations trigger actions as outputs. I do not believe that any reductionist account can show why and how this happens. The most plausible story, which is not reductionist, may be that whenever existing circumstances – in other words, the universe – reach a certain level of complexity, beings embodying that level of complexity emerge, who cannot be fully described by the available concepts, or explained by the available laws. One cannot understand life with the vocabulary and grammar of quantum theory. As brains got even bigger, some animals came to be able to submit their representations to critical reflection. They can determine, not only why they did what they did, but whether it made sense for them to have related their actions to their representations in the way that they did. And they can change their ways of acting because of their critical reflection. These animals inhabit normative space. Norms explain their actions, and justify (or fail to justify) them. (And these explanations cannot be reduced to any other kind of explanation, neither at the Darwinian evolutionary level, nor at the quantum level or any other.)

We are such animals, inhabiting a world of norms. But where do these norms come from? The short answer is that they are rooted in the representations that replace sensory stimuli as primary determinants of action. I have said that wants, desires, satisfaction and the like necessarily provide reasons for action. They are then the sources of many of our norms.

But not all. Critical reflection gives rise to new norms, and undermines some older ones. As we learn more about homosexuality, both the norms for homosexual behaviour and the norms for relating to such behaviour are undergoing radical change. Critical reflection leads us to see connections among norms, and relations between norms, that we had not previously appreciated, and that change the normative landscape. But the landscape is in a deep sense instrumental – norms direct us to interact in ways that bring mutual benefit.

Is this all? Kantians see in reason a source for further, deeper norms. And they relate morality to these norms of reason. A rational being, one guided by these norms, is entitled to respect. But I don’t find – and in this I am surely not alone – norms of practical reason that would take us past the boundaries of the proviso and the contractarian test.

Believers in a deity see that deity as the source of norms that bind us as his creatures. (I think Locke in the Second Treatise takes this view.) But I find no deity. And then there are those who suppose we have a direct awareness of the main features of morality. We just see that the norms of morality are binding on us. My vision must be faulty.

What these views have in common is a conception of the world in which morality binds independently of benefit, or in which reason plays a substantive role in determining what we should do. But I see no need for either reason or morality to have ends of its own, as it were. So I think David Sobel would be right to suppose that on my view, reason would be limited to being, perhaps more than merely instrumental, but constitutive of a good life, in which other constituents set out the substantive features, and reason is focused on the procedures and decisions required to construct a good life out of the substantive constituents.

Does it follow that moral reasons are built out of and broadly instrumental to a person’s non-moral reasons? No. Insofar as moral reasons are related to procedures and decisions, they add to and are not part of the non-moral reasons. But I do have to suppose that moral reasons lack substantive content. Thus honouring commitments, keeping promises, telling the truth, dealing fairly with others, would all fit into my conception of a moral reason. They would all fit into the idea that morality covers the rules for interacting with strangers. In a broad sense, they secure justice. Interactions with friends and relatives are a different matter.

I said earlier that I was a normative minimalist. I want to keep the features of the normative landscape to a minimum. So I want normative considerations to be all part of a single picture of normativity. I claim to find a place for morality in a world in which each participant is concerned with the quality of his or her life and recognizes the central role of cooperation in realizing quality of life. To introduce Kantian reason, or the divine, or direct perception, is to treat morality as independent of the normative considerations that constitute critical reflection.

I admit that it would be better to call “Morals by Agreement” a theory of justice or of social morality than a theory of morality simpliciter. I think that charity, benevolence, gratitude, generosity, compassion, sympathy are tied to our emotions in complex ways whereas justice is and must be free from those emotional ties. It would be wrong to bring emotions into justice, wrong not to bring emotions into benevolence or sympathy. Justice and benevolence have long been treated as if they were parts of a single subject – ethics or morals. But they aren’t birds of a feather and they don’t flock or belong together.

But I want to reject Chris’s suggestion that we want a realist account of those parts of what we call ethics that are distinct from justice. A realist account seems to me as implausible for them as it is for justice. Non-contractarian or non-constructivist it may be, but not realist. Though perhaps I do not understand just what he means by realist.

The better theory Boram Lee

What is the standard for assessing theories of rationality? The better theory is the one whose output more nearly matches the particular judgments that survive critical reflection. And the surviving judgments will direct us to actions that promote or realize cooperation in bringing about Pareto-optimal outcomes, when such cooperation is to be had. The actions need not and typically will not be described in terms of achieving Pareto-optimality, and the interacting parties need not in some situations recognize the nature and extent of their cooperation. But that, I think, has to be the bottom line.

When cooperation is not to be had, then the only plausible fall back is to maximize – to seek one’s best reply to the expected actions of the others. But one should not put maximization and Pareto-optimization on the same level.

You may find my remarks in Life-rules and Such also relevant.

Some final remarks

I’ve enjoyed our discussions, and am more than grateful to my fellow participants, who gave of their time and energy to focus on my book. I was reminded again that a contractarian theory is beset with issues that I have not resolved, but the reminders were all part of a constructive dialogue. I think that “25 On” is an advance on “Morals by Agreement” in several respects: the introduction of the contractarian test (which first saw the light of day in “Political Contractarianism”); the shift from constrained maximization to Pareto-optimization, which now becomes central to my theory; and the recognition that despite similarities, the problem of selecting one from a range of Pareto-optimal outcomes is not the bargaining problem (this meant giving up my first and most cherished way of connecting game theory to a rational view of morality).

Other improvements are largely omissions. I no longer think that a subjectivist account of value is necessary to my argument (though an agent-relative account is essential). I have recognized that labeling the perfectly competitive market a morally free zone was at best misleading; the market rests on moral suppositions (no force, no fraud). I realized that to try to prove the superiority of constrained maximization (or Pareto-optimization) to straightforward maximization by considering the payoffs of interactions among agents employing different strategic principles required a dynamic analysis rather than the static analysis of ch. 6 of the book, and would likely result in showing that a mix of different kinds of agents would be stable, rather than establishing one kind as best. (I now employ a simple macro-argument: the right choice of actions enables everyone to enjoy greater expected benefit than can be achieved by individual maximization.) Finally, the argument for the rationality of the proviso has been returned to the home for bad arguments, thanks to Gijs van Donselaar. The proviso is important, but its role must be defended on other grounds.

Finally, let me commend Susan Dimock’s overview. She packs a lot of content into a few pages. And while her version of my theory is not in every detail what I would have said, it comes close enough that on another day I might prefer her formulation. I think I want to downgrade maximization more than she does, but her position may be closer to “25 On” than my present view is. So thank you Susan for putting it all so succinctly that my views are available to the world in a format such that anyone who wants to know my views needs only a few minutes to find them out.
