

September 30, 2015


Note: I have broken my comments into two parts in order to get around the blog’s spam filter.


Thanks to John and Kieran for the interesting discussion.

Like Kieran, I am sceptical about John’s objections to the Strong Belief Thesis—i.e., the claim that intending to X entails the belief that one will X. As Kieran suggests, the efficacy of John’s argument will largely depend on our conception of “trying”. Consider the account of trying defended by Jennifer Hornsby, who defines trying to do something as roughly “doing what one can to do the thing” (Hornsby, “Trying to Act,” p. 19). On one natural reading of Hornsby, trying to X involves doing all in your power to X. (I will call this the Hornsby account of trying, though it is likely a gross oversimplification of the picture that Hornsby herself endorses.) Given the Hornsby account, intending to try to X entails intending to do all in your power to X. Since bending at the knees is assumed to be something in the log-lifter’s power, and since the log-lifter does not intend to bend his knees, it follows that the log-lifter is being irrational when he fails to intend to bend his knees. He is failing to intend something (i.e., bending at the knees) that is necessary for achieving his end (i.e., doing all in his power to lift the log).

While the Hornsby account preserves the intuition that the log-lifter is irrational for failing to bend his knees, insofar as he genuinely intends to try to lift the log, it also seems much too demanding to constitute a plausible account of trying. One does not need to do all in one’s power to X in order to try to X. One may, for example, decide in advance that one is only willing to put so much effort and no more into accomplishing some task. In such a case, one still plausibly counts as trying to accomplish that task. For example, suppose I am at an auction, and I am trying to purchase a vase. I have $500 on me. However, I have determined that I am not willing to pay more than $350 for the vase. Suppose that I bid on the vase up to the $350 mark, but stop bidding when the price of the vase climbs above $350. Since I still have another $150 at my disposal, I haven’t yet done all that I can to purchase the vase. However, it seems implausible to say that I did not try to purchase the vase.

In light of the preceding problem, I think the cognitivist should reject the Hornsby account. However, there are two features of Hornsby’s account of trying that the cognitivist may wish to take on board. First, trying requires a good faith effort. One does not count as trying if there is something one believes to be necessary for X-ing, but which one deliberately fails to do. Second, trying only requires doing those things that are in one’s power or under one’s control. This is an important feature of trying since it is meant to capture the idea that trying is something we often resort to when we are in doubt about our successfully completing some task. Even if successfully X-ing is not up to us, there may be things along the path to X-ing that are up to us, and trying involves doing those things.

The challenge that faces the cognitivist, who wants to exploit something like Hornsby’s conception of trying, is to provide an account of trying that includes the above features and that also allows us to make sense of examples like that of the vase-bidder and log-lifter. I believe there is reason for optimism that this challenge can be met. My next comment will attempt to offer such an account. If successful, my account of trying will allow the cognitivist to maintain that the log-lifter is rationally criticisable for failing to bend his knees even though he only intends to try to lift the log.


Consider the following souped-up version of the Hornsby account:

TRYING: S is trying at some time T1 to X if and only if, for any Y, if S believes at T1 that doing Y at T2 is necessary for achieving X and S truly believes that doing Y at T2 is under S’s control, then S does Y at T2.
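Stated schematically—with the caveat that the predicate labels below are my own shorthand, not anything in the original formulation—the principle amounts to:

```latex
% TRYING, rendered as a quantified biconditional.
%   B_S(p)        : S believes that p
%   Nec(Y,T_2,X)  : doing Y at T_2 is necessary for achieving X
%   Ctrl(Y,T_2)   : doing Y at T_2 is under S's control
%   Do_S(Y,T_2)   : S does Y at T_2
\mathrm{Try}_S(X, T_1) \;\leftrightarrow\;
  \forall Y \, \Big[ \big( B_S(\mathrm{Nec}(Y, T_2, X))
    \;\wedge\; B_S(\mathrm{Ctrl}(Y, T_2))
    \;\wedge\; \mathrm{Ctrl}(Y, T_2) \big)
  \;\rightarrow\; \mathrm{Do}_S(Y, T_2) \Big]
```

The pair $B_S(\mathrm{Ctrl}(Y, T_2)) \wedge \mathrm{Ctrl}(Y, T_2)$ unpacks “truly believes”: the belief must both be held and be true.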

One distinctive feature of the immediately preceding account of trying is that it includes two temporal markers, T1 and T2. The point of the T1 temporal marker will soon become clear. However, a brief statement about the motivation for the T2 temporal marker is also in order. The T2 temporal marker is meant to address a worry alluded to by Kieran, albeit in the context of a different problem. Kieran observes that an agent may believe that intending some means, M, is necessary for achieving some end, E, and yet fail to intend M because she trusts that she will do so at a later time. Such an agent, he notes, need not be guilty of irrationality. Let us call such cases self-trust cases. One way to handle self-trust cases is to insert a temporal marker into the Means-End Coherence principle:

Means-End Coherence*: Rationality requires that [if one intends to X, and believes that one will X only if one intends to Y at some time T1, then one intend to Y at T1].
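As a rough schematization (my notation, not part of the original statement)—with $O$ for “rationality requires”, $I$ for intending, $I_{T_1}$ for intending at time $T_1$, and $B$ for belief—the temporally indexed, wide-scope principle reads:

```latex
% Means-End Coherence with a temporal marker:
% the requirement operator O scopes over the whole conditional.
O\Big[ \big( I(X) \;\wedge\;
  B\big( \mathrm{will}(X) \rightarrow I_{T_1}(Y) \big) \big)
  \;\rightarrow\; I_{T_1}(Y) \Big]
```

Because $O$ takes wide scope, an agent can satisfy the requirement either by intending $Y$ at $T_1$, by abandoning the intention to $X$, or by giving up the instrumental belief.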

Means-End Coherence* allows us to accommodate self-trust cases because it permits an agent to rationally refrain from intending a means believed to be necessary for some end until the time at which intending the means is actually necessary for achieving that end. (Incidentally, I believe the introduction of temporal markers into Means-End Coherence removes one of the primary motivations for Kieran’s rejection of its wide-scope formulation, but that’s a topic for another post.) A similar issue arises in cases of trying that involve multiple steps towards achieving some end. If purchasing a plane ticket requires that I book my flight by 6pm and pay for my ticket by 8pm, the T2 temporal marker ensures that I may qualify as trying to purchase my plane ticket at 7pm, even though all I have done is book my flight. No doubt, other kinds of considerations may be invoked in order to further refine TRYING, so as to make it more precise. But the present (rough) formulation should be good enough to illustrate the possibility of offering an account of trying that meets the specifications that the cognitivist needs.

Subtleties aside, what makes TRYING of interest (in the present context) is that it allows us to preserve the intuition that the vase-bidder tried to purchase the vase. Let us assume, for the sake of simplicity, that whether or not the vase-bidder bids is completely up to her. We can therefore assume that the “doing Y at T2 is under S’s control” condition has been satisfied. As the auctioneer announces a new price for the vase—$150…$250…$350…and so on—what the vase-bidder believes is necessary for purchasing the vase is constantly updated. When the starting price of $150 is announced, the vase-bidder comes to believe that she must bid $150 to purchase the vase. If she refrains from bidding $150 (i.e., doing what she believes to be necessary for purchasing the vase), then she does not count as trying to purchase the vase. However, if she bids $150 and refrains from bidding $250 at this stage, she still counts as trying to purchase the vase. Bidding $250 becomes a requirement for purchasing the vase only after she forms the belief that it is necessary for doing so. However, this belief is not retroactive. It remains true that the vase-bidder tried to purchase the vase when she bid $150, since bidding $150 is what she believed was necessary for purchasing the vase at the time.

Let us assume that the vase-bidder continues to bid up to the $350 mark. When the announced price of the vase climbs above $350, the vase-bidder stops bidding. At this point, it seems natural to say that the vase-bidder has stopped trying to purchase the vase. Moreover, a natural description of the entire scenario seems to be that the vase-bidder tried to purchase the vase, but that (at a certain point) she stopped trying. At which point did she stop trying? She stopped trying at the point at which she stopped bidding—i.e., the point at which she stopped doing what she believed to be necessary for purchasing the vase.

TRYING also allows us to make sense of why the log-lifter is irrational for failing to intend what he believes to be necessary for lifting the log even though he only intends to try to lift the log. To briefly recap, John observes that the log-lifter may have tried to lift the log in the past without bending his knees. This, by John’s lights, suggests that trying to lift the log does not require bending at the knees. If it did, then it would not have been possible for the log-lifter to try to lift the log in the past without bending his knees. But now we can see where John’s argument seems to go wrong. It assumes that because bending at the knees was not necessary for the log-lifter’s previously trying to lift the log, it is not necessary for the log-lifter’s presently trying to lift the log. But there has been a crucial change in the log-lifter’s doxastic makeup between his past and present attempts to lift the log. During previous attempts, the log-lifter did not believe that bending at the knees was necessary for lifting the log. Of course, he was open to the possibility that it was necessary. But being open to the possibility that P is not the same as believing that P. So, during his previous attempts to lift the log, he did not believe that bending at the knees was necessary for lifting the log. This explains how it was previously possible for the log-lifter to try to lift the log without bending his knees. However, it does not follow that it is now possible for the log-lifter to try to lift the log without bending at the knees, since the log-lifter now believes that bending at the knees is necessary for lifting the log. As with the vase-bidder, what is necessary for the log-lifter to count as trying to X changes as his beliefs about what is necessary for X-ing get updated.
Hence, even if the log-lifter’s trying to lift the log in the past did not require bending his knees, it does not follow that the log-lifter’s presently trying to lift the log does not require bending his knees.

What does this mean for the log-lifter’s intention to try? On the present conception of trying, intending to try to X involves intending to do whatever you believe at the time to be necessary for X-ing. Of course, as one’s beliefs about what is necessary for X-ing are updated, one may change one’s mind about doing what one believes is necessary for X-ing. For example, one may think that some action Y (while under one’s control) is simply not something one is willing to do. At the point at which one both believes that Y is necessary for X-ing and refuses to do Y, one stops trying to X. At this point, you should also stop intending to try. Indeed, if you did not stop intending to try, you would be violating Means-End Coherence*. You would be intending to do something (i.e., trying to lift the log) without forming an intention you believe to be necessary for achieving it (i.e., an intention to bend your knees).

I will conclude by considering a possible objection to the preceding account. There may be a worry that the present account of trying is too strong because it entails that the log-lifter cannot intend to try, insofar as he believes that bending his knees is necessary for lifting the log and he does not intend to bend his knees. If this were right, it would indeed be a problem for the cognitivist since instead of explaining why the log-lifter is irrational, the cognitivist would be explaining away the very possibility of the log-lifter being irrational. However, this worry rests on a mistake. Intending to try is an instance of intending, not an instance of trying. And like all other intentions, one does not actually have to do what one intends in order to count as having the intention to do it. In other words, intending to try no more entails actually trying than intending to kick a ball requires actually kicking the ball. Moreover, while trying to X requires actually doing everything you believe at the time to be necessary for X-ing, intending to try to X does not. Hence, the present objection errs by conflating what is necessary for intending to try to X and actually trying to X. While the latter requires that the log-lifter bend at the knees, insofar as he believes this to be necessary for lifting the log, the former does not. The upshot is that the log-lifter may intend to try to lift the log without actually bending at the knees or intending to do so.

I’d like to thank those who made this discussion possible, especially Hille Paakkunainen and the rest of the gang at PEA Soup, and the editor of OSME, Russ Shafer-Landau, and everyone at OUP. I’m thrilled that Kieran is introducing this paper, since a lot of my work on this topic, and interest in it, grew out of thinking about his excellent work. In his comments, Kieran presents several important and compelling objections to the paper. I’ll offer some preliminary thoughts on them below. But there’s no doubt I’ll have to think more about them.

* On the log lifter case, I had imagined someone who reasoned along these lines: “Well, I can’t lift it without intending to bend my knees, but I can surely try to do so without intending to bend my knees – after all, I just did!” (perhaps said when reflecting on a previous attempt in which she got distracted and failed to intend to bend her knees). I think I share Kieran’s suspicion that there’s something wrong with this line of reasoning, but I’m not sure that I’d go so far as to say that it’s theoretically irrational for her to believe she could try to lift without intending to bend her knees. Perhaps this belief is false. Perhaps it’s based on a misguided theory of attempts. But I’m not sure I see any theoretical incoherence here. And the problem is that the cognitivist needs there to be theoretical incoherence here in order to deliver the verdict that this person is irrational.

* Kieran notes that he doesn’t take cognitivism to provide a general account of the requirements of practical rationality, but rather just an account of the instrumental principle, which he takes to be importantly different from the rest. That seems fine to me. But there’s still a lingering worry – one that Michael Bratman has raised – about what would explain those other requirements (such as a consistency norm on intentions, the Enkratic Principle, etc.) and whether that explanation would also carry over to Means-Ends Coherence. But that would depend on what those other explanations turn out to be like.

* Regarding my second point, Kieran agrees with the implications for cognitivism concerning symmetry, namely that cognitivists are committed to an asymmetry in response. But he argues that this isn’t a problem. In his view, when one is instrumentally incoherent, one should rationally respond by giving up the end. At first glance, this might seem counterintuitive, especially if we focus on the standard examples of instrumental rationality. (Here’s one: Al intends to pass the test, believes he must intend to study to do so, but doesn’t intend to study. Surely Al could, perhaps by reflecting on the good reasons to pass the test, escape his state of instrumental irrationality by forming an intention to study.) But, as Kieran observes, some formulations of the instrumental requirement hold that it applies only when Al believes he must *now* intend to study if he’s to pass the test. And that’s done in order to take into account the phenomenon of rational self-trust: Al might not presently intend to study, but, without being irrational, trust that he will do so in the future.

Here I think Kieran and I have different conceptions of what instrumental rationality requires. On his view, Al isn’t irrational until the moment he believes that he must now intend to study if he is to pass. And, at that moment, it seems that the only rational way forward is to stop intending to pass, since it appears time has run out. On my view, Al could be instrumentally irrational before that moment, and escape that instrumental irrationality either by intending to study or by abandoning the end. I think my view better accommodates the intuitive cases we have in mind when we discuss instrumental rationality (whatever that’s worth). However, I would have to provide a formulation of the requirement that accommodates cases of rational self-trust. There might be a way to do this without requiring Al to believe he must now intend to study if he’s to pass before the instrumental requirement applies to him. Perhaps we could add some clause along the following lines: unless Al trusts that he’ll form the intention at the relevant time, rationality requires that he either intend the means, abandon the end, or revise his instrumental beliefs. That accommodates rational self-trust cases, without having it come out that Al is instrumentally irrational only when time seems to him to have run out – that is, only when he believes he must now intend to study if he’s to pass.

Much of this is speculative, and would need to be developed further. But there might be a way to meet the objections. Either way, I’m very grateful to Kieran for his excellent comments on the paper.

Hi Avery,

Thanks for that helpful comment! I’m not too familiar with the literature on trying, but I like your critique, and modification, of the Hornsby account. There’s a lot to think about in your comment – and this question only takes up part of what you say above – but I’m wondering what you think about the other alleged counterexample to the Strong Belief Thesis I mention: Michael Bratman’s case of the bicyclist who, knowing his absent-minded tendencies, intends to stop by the bookstore, but is agnostic about whether he will.

Here’s what I’m thinking: Suppose we say that Bratman’s bicyclist intends to try to stop by the bookstore. By the Strong Belief Thesis, according to which intending to x involves believing one will x, it should follow that the bicyclist believes he will try to stop by the bookstore. But if we work with Hornsby’s account of trying, or your improved version, it doesn’t seem to be the case that he believes he will try to stop by the bookstore. Rather, he seems to be agnostic about whether he will try. (For instance, on Hornsby’s account, trying involves doing all in your power to X, and so, as you point out, “intending to try to X entails intending to do all in your power to X.” But Bratman’s bicyclist is agnostic about whether he will do all in his power to X.) So, it would still come out that the Strong Belief Thesis is false.

This is a wonderful paper by John and a great opportunity to reflect on the broader motivations at work in this debate over the nature of practical reason. I’d like to pick up on the exchange between Kieran and John as to whether there should be any pressure toward generality in our account of the putative rational requirements.

One powerful motivation behind Cognitivism, which I see in Harman’s seminal paper, is by nature general: to reduce the number of mysteries that plague us concerning good reasoning. If good practical reasoning could be understood in terms of good theoretical reasoning, then we would have explained away a major part of a puzzling phenomenon: how there could be genuinely normative, formal constraints on thought. I’m sympathetic with John’s worries about the ability of Cognitivism to succeed in this general ambition. But Kieran suggests that the Cognitivist need not have such a general ambition, and might only aim to explain what is distinctive about means-end incoherence and blatant intention inconsistency. And here, I take it, the idea is that the principles forbidding these states seem to have some normative force for us, but we don’t want to allow that it is the normativity of practical reason, on pain of allowing the illicit bootstrapping of reasons. So the solution is to claim that the normativity is theoretical rather than practical.

I wonder, though – is there really something so distinctive about these very narrow requirements, such that we’re willing to grant that the explanation of them could be totally different from other, similar pressures on reasoning? With respect to means-end coherence, it’s odd to think that something very special happens at the last minute, such that an agent who waits until the eleventh hour to intend the means and must therefore abandon his end has done nothing instrumentally irrational (maybe he has made an epistemic mistake in trusting himself to form the intention, but that doesn’t seem to be an *instrumental* mistake). But if the Cognitivist is satisfied with what can be accomplished by Very Weak Belief, then we can’t say that the defect in such a case is that the agent has knowingly done something that culminated in his being unnecessarily forced to give up his end – that he has in full awareness undermined his own goal. Rather, whatever we say about the process leading up to the last minute, the *irrationality* lies in a new and very different kind of epistemic mistake that occurs just at the moment the action becomes impossible (by the agent’s lights, at least). This seems to me to be such an unhappy result that it’s better to be a Myth Theorist about practical rationality, denying that there are any formal requirements of practical reason, than to be a Cognitivist about only a few very narrow such constraints. After all, the Myth Theory also avoids the bootstrapping worry.

So basically, John, I’m agreeing with you that the Cognitivist should be troubled by the lack of generality. But maybe I’m overlooking a motivation to be a Cognitivist, aside from a general concern for parsimony and a wish to avoid illicit bootstrapping? I’d be very glad to hear what others think about this.



Thank you for your reply. I can see why you would think that the cyclist poses a problem for the Hornsby account. But I don’t think that it poses a problem for my souped-up version. If I understand you correctly, your point is that we can imagine a case in which there is simply no X such that the cyclist believes that X is necessary for stopping by the bookstore and truly believes that X is within his power. Because he is aware of his own forgetfulness, Bratman’s cyclist fails to believe that there is some X such that X is necessary for stopping by the bookstore and X is under his control. But it does not follow from this that the cyclist does not believe he will try. This is because on my account, in contradistinction to Hornsby’s, successfully trying to go to the bookstore does not require doing what is necessary and in your power to go to the bookstore. It only requires doing what you believe to be necessary and (truly) believe to be in your power. However, in the case of the cyclist, there is no action that meets this specification. After all, the cyclist’s awareness of his forgetfulness ensures that he lacks the belief that any X that is necessary for stopping by the bookstore is also under his control.

Now, recall that TRYING is a biconditional whose right-hand-side is a conditional. Since the cyclist does not believe that there is any X that is both necessary for stopping by the bookstore and under his control, it follows that the antecedent of the embedded conditional turns out to be false, making the conditional true. In other words, in the case in which an agent lacks the belief that there is some X such that X is within their power (which, as we just saw, is true of the cyclist), trying requires no salient action on the part of the agent. The conditional is simply made true by the falsity of the antecedent. Hence, there is no special challenge to the cyclist believing he will try. Indeed, the cyclist is allowed to be even more confident that he would successfully try given that the requirements for trying have been significantly reduced due to his agnosticism about his own abilities. In short, your characterisation of Bratman’s cyclist actually makes his believing he will successfully try easier, given my account. All he has to do is believe that he will make the embedded conditional true, a task made easier given the falsity of the conditional’s antecedent.
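The logical point here is just the vacuous truth of a universally quantified material conditional: when nothing satisfies the antecedent, every instance is true. Sketched out (the predicate labels are my own schematic shorthand):

```latex
% For the cyclist, no action Y satisfies the antecedent
% conditions of the embedded conditional:
%   B_S(Nec(Y,T_2,X)) : S believes doing Y at T_2 is
%                       necessary for achieving X
%   Ctrl(Y,T_2)       : doing Y at T_2 is under S's control
\neg \exists Y \, \big( B_S(\mathrm{Nec}(Y, T_2, X))
  \;\wedge\; B_S(\mathrm{Ctrl}(Y, T_2))
  \;\wedge\; \mathrm{Ctrl}(Y, T_2) \big)

% Hence every instance of the conditional is true
% (false antecedent), so the right-hand side of the
% biconditional holds vacuously:
\forall Y \, \big[ \mathrm{Antecedent}(Y)
  \rightarrow \mathrm{Do}_S(Y, T_2) \big]
\quad\text{and therefore}\quad \mathrm{Try}_S(X, T_1).
```

So the cyclist need only believe that the quantified conditional will come out true, which his own agnosticism about his abilities guarantees.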

Thanks to John for the interesting paper, Kieran for the interesting comments, and Pea Soup for hosting.

I'd like to ask a question about something going on in the background of this debate. Namely, what is required of the cognitivist in order to provide a successful explanation of Means-End Coherence? It seems like nearly everyone in the debate (pre-John) thought that the cognitivist's task was to show (i) that if one is means-end incoherent, then one is violating some epistemic rational requirement. In the last section of John's paper, he says that even if the cognitivist can vindicate (i), she cannot explain Means-End Coherence.

John doesn't tell us what a successful explanation would look like, but he tries to provide some evidence that a successful explanation is not forthcoming. The first consideration is that it doesn't seem like the cognitivist will be able to explain all of the practical rational requirements. Indeed, it doesn't look like the cognitivist can explain all of the rational requirements governing intention (e.g., enkrasia). This doesn't seem like a very pressing concern to me given the fact that cognitivists are generally motivated by a fairly specific connection between intentions and beliefs. Given the specificity of that connection, it's a bit odd to me to think that a successful cognitivist explanation of requirements like Means-End Coherence requires a story about the other practical requirements. This is because requirements like Means-End Coherence immediately look like they will have some connection to the beliefs that the cognitivist about intention is interested in.

The second piece of evidence has to do with which routes of escape are permitted. John argues that in too many cases the cognitivist view predicts that the only rational way to get out of means-end incoherence is by dropping one's end by dropping one's belief that one will do what's intended. This is because in most cases where one is means-end incoherent (and thus, by our assumptions, is epistemically incoherent), one's assessment of one's reasons will rationally demand that one drop the belief that is part of one's intention for the end. This is a bad prediction, according to John, since we are often rationally permitted to adopt the means rather than drop the end.

This does seem to me to be a serious problem for the cognitivist (albeit one that the literature on practical knowledge provides resources for solving, since the proper basis of beliefs about what one is doing has been hotly debated since at least Anscombe). However, it's not clear to me how this calls into question the cognitivist's explanatory claim. After all, if John accepts (i) and the claims about the nature of intention that come along with it--even for the sake of argument--and John accepts that very often one's assessment of the reasons will demand giving up the belief involved with the intention for the end, then John is committed to such a symmetry obtaining. Dropping the relevant belief guarantees that one lacks the intention. So if one is required to drop that belief, one is required to do something that will guarantee that one will no longer have the end. So it seems to me that rather than cast doubt on the cognitivist's explanatory claim, this problem must cast doubt on (i) and the claims about the nature of intention that the cognitivist uses to secure (i).

The upshot is that John's discussion of the explanatory claim does not help me understand what a successful explanation requires. Here are some thoughts that will hopefully spur some discussion.

Minimally, you'd hope that the cognitivist story would guarantee the truth of Means-End Coherence. How might it do that? You might think--as many seem to tacitly think--that (i) itself vindicates the truth of Means-End Coherence. After all, it follows from (i) that, in all the worlds where one is rational, the conditional that the requirement operator scopes over in Means-End Coherence is true and in all the worlds where that conditional is false, one is irrational. So, in order to be rational, the conditional that the requirement operator scopes over in Means-End Coherence needs to be true. It's natural to think that this is enough to secure the truth of Means-End Coherence. As I read the literature, this is the kind of reasoning that leads people to think that the cognitivist view, if successful, explains requirements like Means-End Coherence.

I myself think there are problems with this line of reasoning. There is a tacit premise--that one is rationally required to make true all of the things that must be true if one is to be rational--that I think is dubious. Instead of going directly into that discussion, I'd like to point out that even if that line of reasoning were sound, at best it would turn out that the relevant epistemic requirements entailed the practical ones. Generally, though, people think that mere entailment doesn't amount to explanation in a particularly interesting sense. So it's not clear that this story will really get at the heart of the matter either.

I take it that John agreed with this last claim when he wrote section 3 of his paper. After all, there he seems interested in whether the cognitivist has successfully explained Means-End Coherence even if (i) is true. I'd be interested in hearing from John (and others) about what kind of explanation he was investigating.

Hi Errol,

Thanks for your comment! That’s very helpful. Here are some initial thoughts:

* Regarding the issue of generality, I agree with you that it’s less pressing for the cognitivist to account for rational requirements like Enkrasia, but I would be worried if the cognitivist about Means-Ends Coherence can’t also account for norms of consistency on intention – both what Sarah calls the blatant consistency requirement (a requirement not to intend X and intend not to X at the same time) and a requirement to have our intentions fit with our beliefs into a consistent picture of the future (a requirement not to intend to X, intend to Y, and believe that if one Xs, one will not Y). (There might be worries about how to formulate these requirements, but put that aside.) And I also very much like Sarah’s point regarding the peculiar narrowness of the epistemic irrationality involved on the cognitivist’s account of instrumental rationality.

In short, if Enkrasia can’t be explained, that might be fine. But, given the strong similarities between intention consistency and means-ends coherence, I do think there’s a worry if only one of them can be explained on a cognitivist account. And there’s a worry also if only a subset of those cases that we’d pre-theoretically classify as cases involving instrumental irrationality can be explained by cognitivists.

Here’s what I was thinking about why (i) [if one is means-end incoherent, then one is violating some epistemic rational requirement] wouldn’t be enough to secure the claim that the theoretical requirements explain the practical ones. Consider an analogy: it might be the case that whenever I violate some moral requirement – let’s say a moral requirement to keep my promises to you – I also violate a requirement of prudence, since you’re a reliable detector of promise-breaking and will punish those who break promises. But we can cast doubt on the claim that the moral obligation here is explained by the prudential one. We can do so by considering similar obligations – perhaps my obligation to keep promises to someone who isn’t a reliable detector – that aren’t explained by prudence. So, in the same way, I was thinking that even if whenever one is means-ends incoherent, one violated some epistemic requirement, we could cast doubt on the idea that the epistemic requirements explain means-ends coherence by showing how they couldn’t explain closely related requirements, like intention consistency.

Now, the analogy here isn’t exact. (For one thing, in the promising case, it seems like the two cases don’t involve merely similar obligations, but the same obligation – namely, the obligation to keep your promises). But the case illustrates how I was thinking the generality issue was relevant to the explanatory claim.

* Let’s turn to your point about the symmetry worry. I think you might be right. It might be that the argument casts doubt on the idea that (i) is correct, rather than on the explanatory claim. But, for what it’s worth, here’s why I put the point that way. Cognitivists usually present their view as the view that certain practical requirements are explained by theoretical requirements. If that’s the view, I think they can’t just select out some theoretical requirements, like Closure, and consider how they would apply, while ignoring other applicable theoretical requirements. So, if they hold that theoretical requirements (taken together) explain Means-Ends Coherence, they’ve got to consider how they all would apply to this case. And that’s how you get the counterintuitive asymmetry result.

A cognitivist could avoid the counterintuitive asymmetry result by saying that the requirement to revise in light of one’s assessment of the theoretical reasons doesn’t apply in this case. And that cognitivist could still insist that (i) is true. But then I don’t think you’re allowed to claim anymore that means-ends coherence is explained by the norms of theoretical rationality. (It only appeared to be explained by those norms when you considered some of them in isolation.)

It was the possibility of this sort of move that led me to frame the argument as a challenge to the explanatory claim, rather than (i).

I agree with you that there are larger issues about explanation in the background here, and I’d love to hear more from you and others on this.

A few comments on John's fascinating paper:

(1) I'm a little confused about John's stance on the Strong Belief Thesis. If I read him correctly, he isn't claiming that this thesis is false, but rather that either it's false or it can't provide a maximally general account of the means-end coherence requirement (since a maximally general account would have to apply to aim states that don't satisfy the Strong Belief Thesis). Now suppose John is right about all this. It seems to me that, in this case, the cognitivist should opt for the second horn of John's dilemma, and accept the Strong Belief Thesis while granting that it isn't required for explaining the means-end coherence requirement. The cognitivist could say something like the following:

There are planning states and there are aim states. The Strong Belief Thesis applies only to planning states. These two kinds of states are governed by somewhat different rational requirements. In particular, the consistency requirement applies only to planning states, not to aim states. As Bratman's video game case illustrates, it seems that one could aim to do two things that one knows to be incompatible (e.g., in playing a video game, one could aim to destroy the enemy by firing the gun operated by the left controller, and one could likewise aim to destroy the enemy by firing the gun operated by the right controller, even if one knows that the game is set up in such a way as to make doing both impossible). We need the Strong Belief Thesis to explain the consistency requirement, and luckily this Thesis is true only of states to which the consistency requirement applies, namely the planning states.

Now, while the consistency requirement applies only to the planning states, the means-end coherence requirement seems to apply to both kinds of states. Or rather, there is a separate means-end coherence requirement that applies to each. One of these requirements concerns the relation between means-plans and end-plans (roughly, if you plan the end you must plan the means), and the other requirement concerns the relationship between means-aims and end-aims (roughly, if you aim at the end you must aim at the means).
Moreover, while the Strong Belief Thesis applies only to planning states, the Very Weak Belief Thesis seems to apply to both planning states and aim states. So if the Very Weak Belief Thesis is enough to account for the means-end coherence requirement, then the cognitivist can appeal to the Very Weak Belief Thesis in explaining the means-end coherence requirement (thereby explaining why it applies to both types of states) while appealing to the Strong Belief Thesis in explaining the consistency requirement (thereby explaining why it applies only to planning states).

If this is right, then we can avoid the conclusion that the best account of means-end coherence will leave us unable to provide a cognitivist account of the consistency requirement. And so we can avoid much of the force of John's lack-of-generality argument.

(2) John criticizes the explanation of the means-end coherence requirement that appeals to the Strong Belief Thesis because he says it doesn't apply in all the cases where it seems such a requirement should apply. But the same thing seems to be true of John's explanation. For his explanation applies only in cases where one is certain that, unless one intends the means one will not carry out the end. But this seems too narrow. In "How To Be a Cognitivist about Practical Reason" I give the example of Daria the darts player who is such a skillful player that she knows she can hit the bulls-eye at will. She also knows that the only way she can win the darts game is by hitting the bulls-eye. And she intends to win the game. However, she has no intention to hit the bulls-eye. Rather, she just throws the dart haphazardly in the direction of the dart board, knowing that in so doing she is likely to miss the bulls-eye. This seems to be a clear case of irrationality, and, indeed, of means-end incoherence. But this is not a case where Daria is certain that, unless she intends to hit the bulls-eye, she won't win the game. For she recognizes that, even if she lacks the intention to hit the bulls-eye and throws the dart haphazardly, she might nonetheless get lucky, hit the bulls-eye, and win the game. Hence, the kind of means-end coherence requirement that John's account is meant to explain is too narrow to apply to this kind of case.

(3) I'm not convinced by the direction-of-revision argument. This argument is meant to show that the cognitivist account can't properly explain the means-end coherence requirement because it makes the wrong predictions about how we should revise our attitudes when they are means-end incoherent. John argues that the cognitivist account has the false implication that we should revise by dropping the end-intention, whereas, he argues, it would be fine (and perhaps even best) to revise by forming the means-intention.

However, it seems to me that we need to distinguish between two periods of time. Let t be the first time at which the agent believes that, unless she already intends the means at t, she will be unable to achieve the end. Once time t has arrived, I think Kieran is right that the rational thing to do is to drop the end-intention, since, by the agent's own lights, it is now too late to carry out this intention. Of course, prior to time t, the agent doesn't seem to be rationally required to drop the end-intention. But the cognitivist can allow this. For, prior to time t, the agent's beliefs (including the belief that she doesn't yet intend the means) will be perfectly consistent with the agent's not being certain that she won't achieve the end. Hence, on the cognitivist view, prior to t, the agent's attitudes will be perfectly consistent with intending the end.

Thus, like Kieran, I'm not moved by the direction-of-revision argument. If the cognitivist has a problem, it's not that she makes the wrong prediction about how one should revise one's attitudes. Rather, it's that she has trouble accounting for why one should revise one's attitudes prior to t.

Thanks to John for his paper and his kind visit to Avery's seminar, which I am taking this semester. Based on the discussion in our meeting, I have a few preliminary and speculative thoughts.

(1) With the help of the above discussion, I think I now have a better understanding of John's case of intending to try to lift the log. I think John is correct in asserting that the person is not theoretically irrational, since she may just have a misguided theory of "trying". But if the case is understood in this way, I no longer have the intuition that the person is criticizable for being means-ends incoherent (so there's nothing to be explained here). John's case would be analogous to the following one: a soldier is ordered by his commander to try his best to destroy an enemy bunker when it seems highly unlikely that he will succeed. So the soldier only forms an intention to try his best to do so. But he has a different and mistaken idea of what "trying his best" means. He thinks it means "trying everything short of losing his life", but the commander means "trying everything including losing his life". Now, the rest of the case is the same as John's: the soldier believes that losing his life is necessary for destroying the bunker, but unnecessary for trying his best to destroy the bunker. And he doesn't intend to lose his life. I don't think this soldier should be criticized for being irrational or incoherent in any sense when he comes back alive, having failed to destroy the bunker and mistakenly thinking that he at least tried his best.

(2) With respect to the direction-of-revision argument, I have a puzzle, even though I tend to agree with John that it is possible (and more usual) to be means-ends irrational prior to the last moment for forming the intention. My question is why there is a one-way rational pressure to give up the end and the belief involved. To use John's example: surely there is good evidence that the person doesn't (and won't) intend to buy the airplane ticket. But on the other side, he could have an important job interview in New York, and that's why he intends to go there. If so, intuitively (though beyond the merely intuitive level there is a potential complexity here about the nature of intention-based belief: what its evidence is and how it is justified), he also has good evidence NOT to believe that he certainly will not travel to New York. In other words, if a certain belief is based on an intention, shouldn't it be prima facie plausible that a better intention makes for a better belief? Maybe I am missing something obvious, but it seems puzzling to me why the pressure to revise one's attitudes in such a case comes in only one direction.

Hi Avery,

Thanks for that reply. Here’s why I was thinking that the appeal to your account of trying, like the appeal to Hornsby’s account, wouldn’t help save the Strong Belief Thesis from Bratman’s absent-minded bicyclist.

On your account of trying, one tries if and only if a certain conditional is true. And in intending to try, one intends to make that conditional true. My worry was that the absent-minded cyclist might intend to make that conditional true, without believing that he will make that conditional true. And so the Strong Belief Thesis would be false. But whether that’s so might depend on how we understand the details of Bratman’s case.

Bratman’s bicyclist might believe that turning left in 5 minutes is necessary for going to the bookstore, and believe that doing so is under his control, but he might anticipate going on “autopilot” and getting distracted. (I was thinking that in anticipating going on “autopilot,” he isn’t anticipating either of these beliefs disappearing; he thinks he will still believe that turning left in 5 minutes is necessary, and that turning his bicycle is under his control, but he just anticipates not having these beliefs come to mind, or play their appropriate role in deliberation, when the relevant time comes.) So, I was thinking that if we say that he intends to make that conditional true, it could still be that he’s agnostic about whether he will. And so the Strong Belief Thesis isn’t true.

I think you were thinking that in anticipating the possibility he might go on “autopilot”, he’s anticipating a change in belief. But I was imagining the case such that in anticipating this possibility, he’s not anticipating a change in belief. If we describe the case your way, the Strong Belief Thesis would be safe. But if we describe it my way, it would spell trouble for the Strong Belief Thesis. So, how we describe the example seems to matter here.

Hi Jake,

Thanks for that great set of comments. Here are some thoughts in reply:

(1) This is very helpful. You’re right that in arguing against the Strong Belief + Closure version of cognitivism about Means-Ends Coherence, I had argued for the disjunctive view that the Strong Belief Thesis (SBT) was either false or unable to yield a maximally general account. But the strategy won’t be enough to challenge the account you suggest here, where we say that the SBT applies to planning states, governed by a consistency requirement, and the Very Weak Belief Thesis is used to explain the Means-Ends Coherence requirements on both plan-states and aim-states. (I should add that this strikes me as a very interesting idea and probably the best way for the cognitivist to go.) To argue against the view you suggest, I’d have to fall back on a skepticism about the SBT (or point to the usual worries regarding ignorance, or false belief, about one’s intentions, etc.).

Despite being neutral on the SBT for the purposes of that argument against the Strong Belief + Closure version of cognitivism about Means-Ends Coherence, I am skeptical about the SBT. I don’t think I present anything close to a conclusive case for that skepticism in this paper, since that’s way too large of a task. (But I do at least try to show that Velleman’s four arguments for the SBT aren’t convincing.)

(2) I like your Daria case. According to the requirement of Means-Ends Coherence (MEC), which is the requirement the cognitivist is aiming to explain,

Rationality requires that [if one intends to E, and believes that one will E only if one intends to M, then one intends to M].

Since Daria doesn’t believe she will win the game only if she intends to hit the bullseye (she knows she might hit it unintentionally with the haphazard throw), Means-Ends Coherence doesn’t apply to her.

I don’t think that the case shows that there’s a problem with the proposed cognitivist explanation of MEC. Rather, it shows that there are cases that we’d intuitively classify as cases of instrumental irrationality that aren’t covered by Means-Ends Coherence. And the cognitivist, it seems, should have something to say about these cases. (This seems to me to be another way to press the generality objection against the cognitivist.)

(3) This is also very helpful. The cognitivist can avoid the direction-of-revision argument by saying that cognitivism issues the right prediction for the cases where it seems (to the agent) that time has run out. If we limit ourselves to just those cases, then everything is fine. But this would only make the generality worry more pressing. We want some explanation of the rational requirements that apply to those agents who haven’t yet reached the point of no return, like my example of Al above, and your example of Daria (who, I’ll assume, prior to throwing, could change her mind and intend to hit the bullseye). So, I agree with both you and Sarah that the problem for cognitivism becomes what to say about these cases.

Hi Albert,

Thanks so much for joining the discussion, and for your two comments. Some quick thoughts in reply:

* In the log-lifter case, I was thinking that someone who aims to lift the log (and we’re being neutral for the moment on whether he intends to lift it or merely intends to try to lift it) and believes he must intend to bend his knees if he’s to lift it (though perhaps he believes so intending isn’t necessary for trying to lift it), but doesn’t so intend, can be criticized as irrationally incoherent. But you don’t agree with that verdict for this case, and think it’s analogous to the soldier case, where the soldier intends to try to destroy the bunker, and he believes that this requires him to intend to do everything short of losing his life, and he intends to do, and does, everything short of losing his life. I agree that the soldier case doesn’t involve incoherence. So the question I now face is what the difference might be. Here’s what I think: in the log-lifter case, even if we say that he only intends to try to lift the log, there is still some goal or aim to which his activity is directed (lifting the log), and he can be criticized for not forming those intentions he believes he must form to achieve that aim. But the soldier doesn’t have any aim other than doing what he can to destroy the bunker without losing his life, and he forms all the intentions he believes he must form to achieve that aim.

* As for the worry about the direction-of-revision argument, I do think that there’s a sense in which the important job interview in NY could be evidence: it could be evidence that he ought to (intend to) buy the plane ticket. But that, by itself, isn’t evidence of what he will do. Regarding that question, the evidence seems to support the hypothesis that he certainly won’t travel to NY.

Hi Avery,

Thanks for your discussion in class. I have some concerns regarding your revised account of Hornsby's "trying".

(1) The biconditional seems to be false because its "if" direction seems false. There could be a case in which someone stands ready to try (she will do Y if Y is necessary and under her control) even though no such Y exists at the moment. If so, she is not trying but merely disposed, or ready, to try. I take it that part of the motivation of Hornsby’s account of “trying” is to reject an internal sense of “trying”, while the revised account above still seems more or less “internal”.

(2) My second concern is less straightforward, so I am less sure about it. But here it is: the "only if" direction may well be a true statement. However, my worry is that it seems too weak as well. True as it appears to be, I agree that it can be used to show that the person in John’s example of intending to try to lift the log is not actually trying, because he doesn’t do what he believes to be necessary and under his control. But revise the case slightly: suppose he doesn’t intend to bend his knees because he is too afraid to do so (maybe due to some severe physical wound), afraid to such a degree that intending to bend his knees is not under his control. It seems to me that the person is not trying to lift the log in this revised case either, even though he has a good excuse. But the only-if conditional in your account won’t be able to explain this fact, since the necessary condition is actually satisfied.

To put it another way, as you mentioned earlier, “trying ONLY requires doing those things that are in one’s power or under one’s control”. This thought, although not deducible from the only-if conditional, is at least hinted at or suggested by it. But it seems implausible, because trying sometimes seems to require doing things that are beyond one’s power or control. My choice of case would be this: a drowning person is trying to stay alive, and what is required for trying to stay alive is just doing what she is doing at the moment, that is, making her flailing bodily movements. But because she lacks any swimming ability and is extremely panicked and scared, her flailing bodily movements are beyond her control; that is, she does not make them voluntarily.

What do you think about this concern?

Hi everyone,

I guess this discussion wraps up sometime this evening. I’m about to leave the office, and I’m not sure I’ll be able to check in later. But I wanted to make sure I got a chance, before the comments closed, to thank all of you for carefully reading and commenting on my paper. I’ve really enjoyed this discussion and learned a lot from your questions and objections. And thanks especially to Hille for organizing this, and to Kieran for the commentary.

Best wishes,

Thanks to you, John! A last few thoughts:

Errol, I also found your comments really helpful. I agree that even if it could be established that an epistemic requirement is violated every time some practical requirement is violated, this would not amount to an explanation of the practical requirement. At most, it would be reason to think that there might be some further explanation in the offing.

It seems to me that one thing John’s paper helps us to notice is that the Cognitivist needs to have at least a sketch of a story about what explains the requirements of *theoretical* rationality, and that this debate is difficult to prosecute when we abstract away from the story any particular Cognitivist account offers. It’s unclear whether the objections of Section 3 succeed without looking at the details. But something I think John’s argument successfully shows is that a Cognitivist cannot simply be a quietist about the theoretical requirements, and refuse to give any further account. This kind of view really would be vulnerable to John’s charge that it succeeds in reducing practical to theoretical rationality only by picking and choosing some theoretical requirements and ignoring others, in an ad hoc way. Since a number of people do want to treat rational requirements as basic, I think it is a substantive insight that you cannot take this line and still be a Cognitivist.

Ironically, one of the most prevalent views explains theoretical rationality as an instance of instrumental rationality: the norms are those that best serve our epistemic goal of having accurate beliefs/credences. Appealing to a general notion of instrumental rationality isn’t the same as appealing specifically to the Means-End Coherence principle on intentions, so there wouldn’t exactly be a vicious circularity in being both a Cognitivist about practical reason and thinking of epistemology in terms of epistemic goals. But I take it this wouldn’t be an especially well-motivated combination of views, so that’s another way in which the explanatory claim probably can’t go.

A couple people have mentioned that Cognitivists deny that the beliefs involved in intending must be constrained by prior evidence, and so the “asymmetry” objection fails. The explanation of this tends to appeal to truth as a standard of correctness on belief – that while we require evidence to form normal beliefs, because we need evidence to guide us toward truth, we do not require it to form intentions, because there the truth is up to us. So maybe we could put the question this way: if the Cognitivist uses this kind of consideration to explain away the evidence requirement in the case of intention-beliefs, will the same argument extend to any other instances in which a rational requirement on belief doesn’t apply to intention-beliefs? If not, then I think John’s “picking and choosing” objection will rear its head again.

Thanks for your replies, John. Looks like we're on the same page. Really looking forward to the book!

Hi John,

Apologies for the delayed reply. I had to give a talk yesterday, so I wasn’t able to respond earlier. Since this comment is being posted after the October 2nd time frame, I don't expect you to post a reply. However, I thought I would at least offer a response to your last comments in the hopes that it would be helpful as you continue to reflect on the topic. But first, I want to say thanks again for taking the time to engage in this discussion, and thanks to PEA Soup for hosting.

I suspect that we may be working with different conceptions of what it means for X to be “under one’s control”. I believe there is a widely held pre-theoretical intuition that one can never be in doubt about whether or not one is able to try. I also think there is a pre-theoretical intuition that the only thing that could prevent someone from successfully trying is their changing their mind. In other words, trying is the one thing that is entirely up to us. This, I suspect, is why many theorists have been attracted to a purely internal (i.e., mentalistic) conception of trying. They wish to preserve the idea that, though the world may conspire to prevent our efforts from succeeding, we can at least be confident in our ability to try. But if we have a factive belief condition attached to our conception of trying (which TRYING does), then trying need not be limited to purely internal states in order to preserve the aforementioned intuitions. Whether or not you agree that these are pre-theoretical intuitions worth preserving, these are in fact the sorts of intuitions that the “S truly believes X is under S’s control” clause of TRYING is meant to preserve. This means that the notion of control at play here is one that is incompatible with believing X is under one’s control while also being unsure that one will X if one sets out to do so, short of changing one’s mind.

In the case of the cyclist, insofar as the X that is necessary for stopping at the bookstore is something he is unsure he would do because he may go on autopilot, it is not something the cyclist truly believes to be under his control in the relevant sense. If it were, then the cyclist’s awareness of his tendency to go on autopilot would not be enough to undermine his belief that he would do X. I think this presents us with a dilemma. On the one hand, for any X, if we say the cyclist is unsure he will do X because of his awareness of his tendency to go on autopilot, then the antecedent of the conditional will be false with regard to that X. The cyclist would not believe that X is up to him in the relevant sense. If, on the other hand, we say that the cyclist is sure he will do X, then the Strong Belief Thesis presents no obstacle to him intending to X, where X is necessary for trying.

Finally, it may be helpful to note that the embedded conditional is most charitably understood in de re rather than de dicto terms. The claim is not that the cyclist has a belief with the content “I will successfully try to stop by the bookstore”. Since having such a de dicto belief would implicate the cyclist’s own conception of trying (which need not be the same as TRYING), there is no reason to assume he would have such a belief. Rather, the claim that is being made is that for any X that satisfies the antecedent of the embedded conditional, X is something he must believe he will do in order to count as intending to try. But again, if he is unsure he will X because of his tendency to go on autopilot, then X would fail to satisfy the antecedent of the embedded conditional, which would mean that X is not something the cyclist must believe he will do in order to count as intending to try.

As we discussed in class, I do agree that the biconditional is too strong, albeit for reasons different from those you stated in the above comments. Since the cognitivist need not share Hornsby’s burden of claiming that trying is an external rather than internal act or event, the mere fact that the biconditional allows for purely internal tryings does not prevent the cognitivist from appealing to TRYING. Indeed, when I put forward TRYING as an account of trying, I was not aiming to preserve this aspect of Hornsby’s account.

My reason for thinking the biconditional is false has to do with the fact that the antecedent of the embedded conditional may be satisfied in cases in which an agent simply lacks the belief that there is any necessary condition for bringing about some end that is under her control (which is also a point you brought up in class). If the biconditional formulation were true, it would mean that I would count as trying to turn invisible simply because there is nothing I believe to be necessary for turning invisible that is also under my control. Unfortunately, this would be of little help to John (in the context of the present discussion) since we can simply switch from a biconditional (with a conditional on the right-hand-side) to a conditional (with a second conditional as its consequent). Given this revision, TRYING would simply stipulate a necessary condition for trying, and the reply to John’s objection would proceed just as before.

Regarding your second concern, I would agree that your revised log-lifter satisfies the necessary condition specified by TRYING if we assume he believes that bending his knees is not under his control due to the pain attached to doing so. But a word of caution: it is not enough for him to think that bending his knees would be uncomfortable or painful. This is because he may think that bending his knees would be uncomfortable or painful and yet also believe that it is under his control, in the relevant sense. Now, it sounds like in your re-imagined case, the log-lifter sets out to see if he can lift the log without bending his knees (given the pain attached to doing so). If so, then this should be included in the content of his trying. He intends to try to lift the log without bending his knees. But insofar as he also believes that bending his knees is necessary for lifting the log (and that actually lifting the log is therefore impossible without bending his knees), it seems natural to assume that he no longer has an intention to lift the log. He only has an intention to try to lift the log without bending his knees. He is akin to the person who intends to try to lift a boulder in order to demonstrate that it cannot be lifted. Such an agent would not be violating Means-End Coherence since he now intends to try to lift the log without bending his knees, which does not rationally require that he intend to bend his knees. In short, I think the cognitivist would be correct to say that such an agent is not guilty of irrationality.

Regarding your final point, I think it is important to keep in mind that TRYING is offered as a way of explaining why the log-lifter violates Means-End Coherence* even if he merely intends to try. As such, the cognitivist need not take any stance on the sufficient conditions for trying.

Thanks for a great and illuminating discussion, everyone! I'll close comments now.
