
June 17, 2014

Comments


Hi Errol (and Jussi and Daniel),

Very cool paper. For reasons that are probably very local and uninteresting, I can't seem to access Daniel's paper so I'll have to go on Errol's reconstruction. I had two questions that I had hoped Daniel might answer (of course Errol, Jussi, or anyone who isn't me is free to take a crack at them).

First, in the transition from (C) to (C*) there's a shift from beliefs about facts to ways the facts appear to be. Should there be a belief requirement to the effect that p is S's subjective reason only if p is believed by S? I have a general worry here that can be brought out if we think about the case of perceptual belief. On the one hand, you don't want to say that the rationality of perceptual belief turns on believed reasons. (At least, I don't think so.) That's no problem if we broaden things to include ways things appear to be. But then there's a problem if we move to ways things appear to be and drop any doxastic requirement on possession. A natural gloss on a subject's reasons for V-ing is that they are the considerations in light of which the subject V'd, something that the subject takes to show that there's something favorable about V-ing, something that makes V-ing appropriate or sensible, etc. It's hard to see how p could play this role (capture the light in which S took there to be something favorable about V-ing) if p isn't something that S is committed to.

I guess, then, there's a sort of dilemma that arises for any view that links rationality to reasons of any kind. If you want to allow that perceptual beliefs are rational, it looks like you have to allow for subjective reasons that are non-doxastic appearances. It's not clear, however, that such things can play the role of motivating reasons if the occupant of that role is something like a consideration in light of which the subject took her response to be favored, appropriate, fitting, etc. If, however, you want subjective reasons to be something like motivating reasons, it seems that a doxastic requirement is needed, in which case you can get perceptual beliefs to come out as rational only by severing the link between rationality and reasons.

The second question has to do with puzzles that arise when we think about beliefs about rationality, reasons, etc. What does the view say about cases in which the non-normative facts (known to the subject, say) give the subject no objective reason to X but it also appears to the subject that these facts are an objective reason to X? I have some general worries here about vindicating some intuitions about rationality while retaining something like the enkratic requirement, but that can be fleshed out later. Just not sure what role apparent normative facts play in determining subjective reasons and rationality.

I’d also like to thank PEASoup for giving my paper this attention. I really appreciate it. And thanks to Errol for his generous and stimulating comments. I’ll try to do them justice.

Errol raises a number of important critical points. I’ll address them in turn. Needless to say, I won’t be able to resolve the issues he raises here but I hope to indicate the directions one might take in addressing them.

1.
Errol suggests that a proponent of the counterfactual analysis might build in a knowability condition: A has a subjective reason to X iff, were the facts as they appear to A to be, it would be knowable a priori that the facts provide objective reason to X.

If the ‘counterfactualist’ makes this revision, her view would be closer to the one I advance. So, perhaps I could welcome it. But, as stated, the principle faces the same objection I level against the orthodox counterfactual analysis. For any case in which it is not (metaphysically) possible for the facts to be as they appear to A to be, we’ll get the ‘explosion problem’ (to use Errol’s nice phrase).

Another worry with the principle, as stated, is that it makes whether A has a subjective reason to X, hence, whether it is rational for A to X, depend on what she is in a position to know in the counterfactual scenario, rather than what she is actually in a position to know. That seems wrong.

Of course, if the principle as stated is problematic, a revised version might be available. I’ll think about what that revision might look like…

2.
As Errol notes, my suggestion is that whether a subject has a subjective reason to X depends on whether it is knowable a priori that, if the facts are as they appear, they give her an objective reason to X. This raises the question: knowable for whom? In the paper, I suggest that different answers to this question will deliver different (not necessarily competing) conceptions of rationality. (Compare: she’s not very smart/reasonable/clever for a Dean of the Faculty, though she’s smart/reasonable/clever for a philosopher.) For the most part, I work with the idea that it is the agent herself who must be in a position to know the relevant conditional a priori.

Errol worries that this entails that a subject has a subjective reason for Xing only if she has the concept of an objective reason.

For what it’s worth, I’m not very worried by this, since I don’t think such concepts are hard to come by or to exercise. That is, I’m not sure it requires a great deal of ‘conceptual sophistication’ to make judgements about reasons (if only implicitly). However, I grant it would be nice not to have to commit to, let alone defend, controversial views about concept possession for the purposes of the paper. (My colleague, Kurt Sylvan, has a good discussion of this in his own paper on the topic, ‘What Apparent Reasons Appear to Be’, forthcoming in Philosophical Studies.)

To avoid this, I could relativise knowability to something other than the agent in question – perhaps the relevant proposition must be knowable a priori for humans.

3.
Related to the relativity point, Errol gives the great (counter)example involving Bob.

The above move would deal with this case. Although Bob cannot know a priori that the facts (as they appear) provide him with an objective reason to get his wife chocolate, this is something that is knowable a priori for humans (or husbands, or whatever).

A rather different response to the Bob case is to insist that Bob is able to have the relevant knowledge. As Errol describes the case, Bob has a psychological block. So, we might say that Bob can know a priori that the apparent facts give him objective reason to get chocolate, hence that he has subjective reason to do so, though his psychological block is preventing him from exercising this ability. After all, as the case is described, his upbringing is blocking, not removing, his powers of normative reasoning.

That said, perhaps I shouldn’t get too hung up on the details of the case.

4.
Errol’s other counterexample, involving Vlad the mathematician, is the one that worries me the most. Before discussing it, I’ll say a bit about why.

In the paper, I wanted to focus on what it takes for an action to be rational, and set aside or keep at arm’s length the issue of what it takes for a belief to be rational. There are complications that arise in the case of belief that do not arise when thinking about action (for one thing, I don’t think we have a clear idea of what an objective reason for belief might be). However, I realise you can only hold this issue off for so long and Errol is right to press me on it.

Returning to Vlad… I need to think more about the case but I’m inclined to dig my heels in and insist that (i) and (ii) are not met. As the case is described, Vlad’s beliefs that p and that q are inconsistent. Moreover, this inconsistency is knowable a priori for Vlad. In that case, it cannot be rational for Vlad to believe both p and q. After all, he can know a priori that at least one of them is false.

Perhaps the inconsistency is not obvious. In that case, Vlad’s failing might be excusable and, more importantly, it might lessen the degree to which his beliefs are irrational (cp. pp. 14-15 of the paper on fine-grainedness). But irrational they remain.

Of course, this makes some assumptions about rational belief that I’ve yet to defend. As I said, arm’s length…

5.
Errol’s final point is about the rational belief constraint (arm’s length!!), according to which only rational beliefs contribute to determining one’s perspective, hence, to what subjective reasons one has. He points out that, as a simplifying assumption for the purposes of the paper, this might be okay, but if I hope to extend the account to rational belief I’ll be stuck with a vicious circularity.

I’m not sure the circularity here would be so vicious. The resultant account of what it is for a belief to be rational would be given in terms of how things appear to the subject to be, where how things appear is determined in part by what other rational beliefs she has. We would still have a non-circular account of the conditions under which a particular belief is rational, if not of rational belief in general. (Cp. Wedgwood’s avowedly circular account of rational belief in his influential paper, ‘The Aim of Belief’.)

That said, I realise this might do little to assuage worries. If I were to try to avoid the circularity, I’d probably follow Mark Schroeder’s proposal (in ‘What Does It Take to “Have” a Reason?’, which Errol refers to). Very, very roughly, Schroeder’s idea is that what a subject believes can contribute to making it the case that she has a subjective reason even if that belief is itself irrational – it’s just that that subjective reason is guaranteed to be defeated. Hence, it cannot make it rational for her to form further attitudes or perform certain actions.

That’s a bit abstract. I’d be happy to discuss this idea further or bring it down to earth with an example or two but perhaps that’s enough for the time being.

Thanks again to Errol for taking the time to prepare his comments. They’ve given me a lot to think about.

(I've just seen Clayton's comments - I'll respond soon...)

I'm not quite seeing how the explosion worry affects Daniel's account. Errol, are you assuming that an *indicative* conditional with an impossible antecedent is vacuously true? One could plausibly deny this, but maintain the standard view of counterfactuals with impossible antecedents.

Lee: I was mainly objecting to Daniel's reply to this sort of objection. Although he doesn't say so explicitly, he must have been granting that indicatives with necessarily false antecedents are true. Perhaps the way to go is the way that you suggest.

Clayton

The paper should be freely and openly available at the JESP website (I don't think there can be any institutional blocks). I tried to link directly to the PDF, but you should be able to find Daniel's paper from their website:
http://www.jesp.org/

I might also try to respond to your first worry on Daniel's behalf (sorry Daniel). Daniel explicitly allows subjective reasons that are based on non-doxastic states (bottom of p. 8). However, the subjective reasons he is talking about are not motivating reasons. It might be that motivating reasons do have further conditions. This is just about subjective reasons ex ante - the kind of reasons that are supposed, on some views, to determine what it would be rational for an agent to do (before we know what the agent in fact did).

Daniel (and Errol)

I want to push Errol's main concern from a slightly different perspective. We need to distinguish between the things that are subjective reasons and the relation they stand in to our actions (call it the favouring relation).

What motivated many people when they talked about these reasons was that somehow the things that are subjective reasons depend on the agent's perspective on the world. This is why we go to the closest metaphysically possible world where things are as the agent believes them to be. However, these people wanted to remain objectivists about the favouring relations between the things that are subjective reasons and our actions. None of that was to depend on the person's thoughts about reasons, for example. Part of this was motivated by the idea that an agent can act rationally (and on subjective reasons) even if they have not formed beliefs about those reasons.

Now the worry is that Daniel's view, because it is fixing some other problems of the account, returns not only the things that are reasons but also the reasons-relations, in some sense, to a less objective standing. They too now depend on the agent's (or some larger community's) views about reasons, and on what they can know to be a reason. This is revealed by the structure of Daniel's account:

(E) a subject has a subjective reason to phi iff it is a priori that (if p then q)

where p is 'the facts of the situation are as they appear to her to be' and q is 'those facts give her objective reason to phi'. Here the whole conditional, with reasons in the consequent, is inside the scope of 'it is a priori that', where this refers to what the agent can know a priori.
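To put the scope point in symbols (the notation and labels here are mine, just shorthand for the phrases above, not Daniel's own formulation):

```latex
% (E): wide scope. The whole conditional, reasons-consequent included,
% falls under the a priori operator.
\text{(E)}\quad \mathrm{SR}(A,\varphi) \iff \Box_{\mathrm{ap}}\,(p \rightarrow q)

% A contrasting narrow-scope reading, on which only the consequent
% is governed by the operator, would instead be:
\text{(E$'$)}\quad \mathrm{SR}(A,\varphi) \iff (p \rightarrow \Box_{\mathrm{ap}}\,q)
```

Here $\Box_{\mathrm{ap}}$ abbreviates 'it is a priori (for the agent) that', $p$ and $q$ are as above, and $\mathrm{SR}(A,\varphi)$ abbreviates 'A has a subjective reason to phi'. On the wide-scope reading (E), what the agent can know a priori about the reasons-relation itself enters into whether she has a subjective reason.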

So, now the worry Errol rightly points out is that not only the agent's perspective on the non-normative features of the world but also what she (and others) are able to know about reasons affects what subjective reasons the agent has. This seems to have some problematic consequences and I'm not sure what the benefits are.

The question then is: why not formulate the view so that only the things that are subjective reasons depend on the epistemic possibilities, but not the reasons-relations they stand in? Would something like:

E* A subject has subjective reasons to phi iff the facts in the epistemic possibility that is closest to how things appear for the subject give her objective reasons to phi in that epistemic possibility.

work? This seems to solve the previous problems and avoid many of the new ones.

This should be a permanent link to the JESP page for the article. It has the abstract and a link for the PDF. Of course, the front page of JESP now has Daniel's article on it, but when PEA Soup readers are browsing through this posting twenty years from now...

I think I'll also point out (hereby) that the article now has 839 downloads. And that's not really atypical -- Elinor Mason's "Objectivism and Prospectivism about Rightness", published last year, has over 2000 downloads, and each of the first ten articles JESP published, in 2005 and early 2006, has over 5000 downloads.

I guess this is shameless promotion of JESP, since I'm not ashamed at all. Submit to JESP!

Jussi:

"...the worry Errol rightly points out is that not only the agent's perspective on the non-normative features of the world but also what she (and others) are able to know about reasons affects what subjective reasons the agent has. This seems to have some problematic consequences and I'm not sure what the benefits are."

Which consequences, and why are they problematic?

Thanks Clayton for the comments.

As Jussi has already said, my concern in the paper is with the conditions under which a subject has a subjective reason to X, not the conditions under which she Xs on the basis of or for that subjective reason. A subject might have a subjective reason to X, in this sense, even though she does not, and is not going to, X.

I’d have to think about whether, if what a subject perceives is a motivating (subjective) reason for her to X, she must believe it. I’m not sure…

Regarding the second question, for better or worse I claim (pp. 17ff) that subjective reasons are provided only by apparent non-normative facts. That is, I deny that apparent normative facts make a difference to what it is rational for a subject to do.

One motivation for this constraint is that it allows a proponent of the view I favour to deal with three envelope (mineshaft/Jackson) cases. A more principled reason is that subjective reasons are modeled on objective reasons. It seems to me plausible that (e.g.) the fact that I ought to go to the cinema is not itself an objective reason for going or does not provide such a reason. By analogy, the apparent fact that I ought to go to the cinema is not itself an apparent or subjective reason for going or does not provide such a reason.

(I’m aware this is not an uncontroversial view of reasons. The section of the paper where this issue crops up is, as I'm keen to stress, 'exploratory and tentative'.)

This looks like a good occasion to compare Daniel’s view with my view in ‘What Apparent Reasons Appear to Be’. I think my view avoids (indeed, it is designed to avoid) Errol’s objections and it is not a version of the view Daniel is attacking. But I will also suggest some replies on behalf of Daniel that would reduce the distance between our views. Given certain controversial views about normative belief, Daniel’s view might collapse into a close relative of my view. But my view would still have the advantage of not *requiring* us to accept the further controversial views about normative belief. (Of course, we might accept these views anyway, but there is still good reason to not build commitment to them into our account of subjective reasons.)

First of all, I will stress that a circularity worry is part of what motivated me to analyze rationality in terms of *competence*, where a competence/performance distinction is also drawn to allow for rationality without success or even objectively likely success. My view goes roughly as follows:

(CAT) P is a subjective reason for S to PHI iff (i) it appears to S that P, (ii) S is attracted to treating the apparent fact that P like an objective reason to PHI, and (iii) this attraction manifests S’s competence to treat P-like considerations (if true) like objective reasons to do PHI-like things iff they are such reasons.

CAT is not a view on which subjective reasons are analyzed in terms of rationality or anything straightforwardly analyzable in terms of rationality. Of course, one might have the substantive view that rational capacities reduce to objective reasons-sensitive competences. (This seems to be Raz’s view in From Normativity to Responsibility.) If so, this view would be extensionally equivalent to a view that analyzes subjective reasons in terms of rationality. But the objective reasons-sensitive competences are more fundamental. So the extensional equivalence does not yield circularity.

Daniel could make a similar move. Indeed, he is not far from doing so when he suggests (on p.16) that knowledge might be prior to rationality. I myself think that knowledge is prior to rationality, though I think it is analyzable anyway as apt belief, which is not essentially reasons-based. Knowledge is not a standing in the space of reasons, but rather a precondition for standing in that space. (Cf. “The Place of Reasons in Epistemology” by me and Ernie Sosa.) But in this context, a different but related view becomes salient: one might think that knowing that P is an objective reason to PHI amounts to an apt exercise of objective reasons-sensitive competence. If, as Daniel suggests, we held a lax account of normative belief on which believing that P is a normative reason amounts to treating P like a normative reason (or the like), this would be plausible. Then Daniel would avoid the circularity objection.

Nevertheless, my view would retain an important advantage. It would be nice to avoid an account of subjective reasons that *requires* us to adopt a lax account of normative belief and concept possession. To address other objections, Daniel is already suggesting that a lax account of normative belief is needed to defend his view. But we could adopt a view that doesn’t require this (though this view might be extensionally equivalent to Daniel’s if a lax account of normative belief is true).

Now, I think my view is better in other ways. I think Daniel’s view is too demanding and faces what I call the “Problem of Wouldn’t-Be Reasons” (this is related to the theme of Errol’s Vlad case). Knowability is factive. But there are more plausible non-factive conditions to use instead. If we want to keep the rest of Daniel’s view, we could say that P is a subjective reason for S to PHI iff S is in a position to have the a priori rational belief that if the facts of the situation are as they appear to S to be, those facts give S an objective reason to PHI. Or, to avoid circularity, we could say that P is a subjective reason for S to PHI iff S is in a position to believe while manifesting a priori competence that if the facts of the situation are as they appear to S to be, those facts give S an objective reason to PHI. This would help with Errol’s Vlad case.

Frankly, though, these a priori knowability cases are just hard to assess. If the fact that P is really knowable a priori for a subject, it is hard to see how the subject could be *justified* in believing that ~P. My instinct would be to say that this illustrates the need to divorce rationality and justification. We should keep rationality on the less demanding side. Daniel instead suggests that we distinguish between an attitude/act’s being rational and a person’s being excusable for having the attitudes she has. I am inclined to think there is no difference between these things. Rationality is already a kind of excusability. There is no justification/excuse distinction for rationality. (There is a rationality/blamelessness distinction, but excusability is more than blamelessness, as John Gardner has taught us.)

But maybe it is fine to describe things in both ways. Maybe the concept of rationality suffers from indeterminacy and there is no fact of the matter here.

Three more comments. Firstly, invoking a competent treating (or attraction-to-treat) condition rather than a normative belief condition will help with Errol’s Bob case. Even if Bob cannot get himself to form the rational belief that P is a reason to PHI, he can still treat the apparent fact that P like an objective reason to PHI. I think this illustrates not only that a treating condition is preferable to a normative belief condition, but also why normative belief is not easily understood in a lax way. I am sure Errol agrees, since he also appeals to a treating condition in his own account of subjective reasons.

Secondly, a competent treating condition (or the like) would allow us to steer between the horns of Clayton’s dilemma. I should, however, say that I am not sure why Clayton thinks anyone would hold that subjective reasons are motivating reasons. They may have an indirect connection to motivation, but they are less than motivating reasons.

Thirdly, I think both Daniel and I could avoid Clayton’s final worry by distinguishing between structural and substantive rationality. This is not, I would stress, the same distinction as the distinction between rationality and justification. Substantive rationality is more than coherence but less than justification. See my paper “Rationality and Justification: Reasons to Divorce?” for arguments (on my website).

Finally, apologies if anything I have said wrongly ignores what happened after the first four comments. They were posted while I was still writing. (It took a while to write this.)

Hi Daniel,

Thanks for your responses. I don't know why I can't get JESP to open. I thought it might be my browser, but I've tried Chrome and Safari and don't have any luck.

On motivating/subjective reasons. Fwiw, I didn't think subjective reasons were motivating reasons, but I thought there might be similarities nevertheless (e.g., in terms of the crucial psychological conditions that have to be met for something to be your reason or for something to be a subjective reason). The crucial question to my mind was whether a subjective reason was something that you were committed to or whether it was something that you could be neutral on. In the case of perception, it seems there's nothing that you're committed to apart from the perceptual belief itself. So, if there's a belief-requirement on subjective reasons, it's hard to see how they could do much work in explaining the rationality of perceptual belief. Anyway, the crucial question (to my mind) was just whether it follows from p being among your subjective reasons that you believe p.

On normative/non-normative facts: The restriction makes sense. I'm trying to work out the details of your view to see how it applies to things like the rationality of beliefs about rationality, suspending judgment, suspending judgment on claims about rationality, etc. Unfortunately, I still cannot get JESP to open on my laptop so I think that will have to wait.

Thanks Jussi for some really thought-provoking comments.

Perhaps, following the invite above, you’ll say more about the ‘problematic consequences’ you have in mind. In the meantime, here are some thoughts.

Since it’s been the focus of the discussion so far, I want to start by taking the opportunity to say something brief about why I appeal to the notion of a priori knowability. A standard way of thinking about subjective reasons is that they are determined by (‘depend on’, to use Jussi’s phrase) the agent’s perspective. One of the main points of my paper is that we can’t understand this ‘determination’ as metaphysical. How, then, should it be understood? It must be a kind of epistemic determination. But what is that? The answer I give, borrowing an idea from David Chalmers’ work on epistemic modality, is that it is a matter of a priori entailment. So, the fundamental thought is that we need a notion of epistemic determination, and the appeal to a prioricity is offered as a way of cashing this out.

Back to Jussi’s post…

Jussi focuses on the fact that my account is (as I say in the conclusion of the paper) ‘doubly subjective’. Whether a person has a subjective reason to X depends both on how the facts appear to her to be *and* on whether she can reason a priori from those apparent facts to the conclusion that she has objective reason to X.

However… I don’t think it’s quite right to suggest that, on my account, the favouring- or reasons-relation in which subjective reasons stand to actions is less than objective (in the sense that it depends, as Jussi puts it, on the agent’s or anyone else’s views about reasons). According to the analysis I propose, A has a subjective reason to X only if she can know a priori that, if the facts are as they appear, those facts give her an objective reason to X. Knowledge is, of course, factive. So, according to the analysis, A has a subjective reason to X only if, if the facts are as they appear, those facts give her an objective reason to X.
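The factivity step can be set out schematically (the symbols are mine, just shorthand for the phrases in the paragraph above):

```latex
% A priori knowability is factive: what can be known a priori is true.
\Box_{\mathrm{ap}}(p \rightarrow q) \;\vdash\; (p \rightarrow q)

% So, given the analysis, having a subjective reason to X guarantees
% the objective conditional:
\mathrm{SR}(A,X) \;\Rightarrow\; \Box_{\mathrm{ap}}(p \rightarrow q) \;\Rightarrow\; (p \rightarrow q)
```

where $p$ is 'the facts are as they appear to A', $q$ is 'those facts give A an objective reason to X', and $\mathrm{SR}(A,X)$ abbreviates 'A has a subjective reason to X'.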

On this view, whether an (apparent) fact favours performing some action does not depend on whether an agent takes it to do so, or whether she is in a position to know that it does, or anything of that sort. The idea is only that whether that (apparent) fact is a subjective reason for the agent depends on whether she is in a position to know that it stands in the (objective) favouring-relation to the relevant action. To put the same point differently, subjective reasons could not favour anything that objective reasons would not favour.

That said, it is of course a feature of my view that whether an agent has a subjective reason for Xing, hence whether it is rational for her to X, depends on what she is in a position to know or, as Errol puts it, the powers of her normative reasoning. But that seems to me the right thing to say. Whether it is rational for an agent to do something depends (surely!) on the range of her rational capacities.

Turning to Jussi’s positive proposal, he suggests we might formulate an account of subjective reasons as follows:

E* A subject has a subjective reason to X iff, in the closest epistemic possibility to the one in which the facts are as they appear to the subject to be, the facts give her an objective reason to X.

(NB: I’ve reformulated Jussi’s principle in a way that makes it easier for me to raise the concerns below. However, I think the reformulation is equivalent to the original.)

As it happens, this is an account which I have toyed with, and continue to toy with. I’m very much open to it. As mentioned above, I take the central claim of the paper to be that what you have subjective reason to do is determined by what is epistemically possible for you. E* might provide a way of capturing that thought.

However, I’m not sure what advantages E* has over my present proposal. For example, Errol’s example of Vlad the mathematician seems like a counterexample to E*. In that example, I take it, it is not epistemically possible that both the propositions Vlad believes are true. So, E* will come out as trivially true – I can explain this point in more detail if need be – and we’ll get what Errol calls the ‘explosion problem’.

A rather different (and more inchoate) worry about E* is as follows. How do we determine nearness or closeness among epistemic possibilities? Suppose it appears to Harold that Maude is in pain. Is an epistemic possibility in which Harold has an objective reason to relieve Maude’s pain (epistemically) closer for Harold – i.e. closer to the possibility in which things are as they appear to him to be – than any epistemic possibility in which Harold lacks an objective reason to do so? At present, I don’t quite know how to answer those questions and I wonder if answering them would lead to a ‘doubly subjective’ view of the sort Jussi wants to resist.

As I said, inchoate.

This has been one of the more involved responses. I’m happy to try to spell things out or expand on these points if people want to pursue them further.

(I'll respond to Kurt and Clayton as soon as I get the chance...)

Clayton – thanks for clarifying the issue.

The account I propose is designed to allow that whether an agent has a subjective reason to X depends, not only on what she believes, but on how things appear to her to be, where how things appear might be determined in part by her perceptual experiences (see pp. 8-9). So, on this view, what an agent sees (or seems to see) might provide a subjective reason (hence, make it rational) for her to X.

A worry one might have about this – your worry, I think – is how a consideration could give an agent a reason to X when she is not, to use your phrase, committed to it. For example, how could (seeming to) see that someone is in pain give you a subjective reason to help her, hence make it rational for you to do so, when you have not yet taken a stand on whether she is in pain (whether things are as they seem)?

Perhaps the point to stress here is that, in the perceptual case, the consideration which provides the reason is not a *mere* consideration – not just something the agent is considering or entertaining. Rather, it is the content of a perceptual state or episode. And, like belief and unlike desire or imagination, perception is a *presentational* attitude (to use Mark Schroeder’s helpful term, from the paper cited above). A presentational attitude presents its content as being true, as a fact, or as obtaining.

To return to the example, whether or not you (now) believe it, your perceptual experience presents you with the consideration that someone is in pain as obtaining or as a fact. So understood, it is not hard to understand how what you (seem to) see could provide you with a (subjective) reason for helping her, even if you have yet to respond to what you see by forming the relevant belief.

Of course, there’s more to be said about what makes an attitude or episode ‘presentational’. But that’s a more general issue (in philosophy of mind, broadly-construed).

(Kurt – bear with me…)

I wonder how much the Vlad example generalizes. I'm inclined to think, for example, that standard cases of inconsistency in scientific theorizing or mathematics involve rational beliefs. Likewise, given the pervasiveness of ordinary inconsistent beliefs, we might think that E yields either overspill (we have too many subjective reasons, because we can have rational yet inconsistent beliefs) or underspill (we don't have subjective reasons, because we don't very often have rational beliefs).

Kind of a general problem, of course, but still.

Thanks to Kurt for such a detailed post. He raises a lot of points and I won’t address them all in these remarks. Do let me know if you think I’ve overlooked something significant.

I’ll set aside the issues which I’ve already discussed – to everyone’s satisfaction, I’m sure! – such as the circularity concern, the requirement that the agent possess the concept of a reason, and the Bob and Vlad cases. Instead, I’ll focus on an issue we’ve yet to explore.

Kurt has offered his own account of subjective reasons, one which strikes me as very interesting indeed. As he notes, the difference between his proposal and my own might not be a large one. While I have my concerns about his account, I don’t think it would be fair to Kurt to use this as an opportunity to criticise that account or the arguments he offers in support of it (though, if encouraged, I could say a few things). Instead, I’ll focus on the difficulties he thinks my account faces.

Kurt’s main worry is that it suffers from the Problem of Wouldn’t-Be Reasons. What is that problem? His paper provides an illustration (§3.2):

“Suppose that it appears to some scientists that it is a law of nature that Fs are Gs, but this appearance is misleading. Every time an F appeared to be a G involved an illusion; in reality, Fs are nomically guaranteed not to be Gs. Now suppose that the scientists have been correctly told that X is an F, but nothing more. The fact that X is an F is an apparent reason for the scientists to believe that X is a G. But the fact that X is an F is not an objectively good reason to believe that X is a G. Objectively speaking, the fact that X is an F is a conclusive reason to believe that X is not a G.”

The example concerns rational belief – which I’d hoped to set aside! – but for present purposes it does the job.

The case might provide a counterexample to certain theories of subjective reasons, in particular, counterfactual analyses along the lines of (C) (see Errol’s original post). But I don’t think it works against my account. To save you scrolling up, here it is again:

(E) A subject has a subjective reason to X if and only if it is a priori that, if the facts of the situation are as they appear to her to be, those facts give her an objective reason to X.

In Kurt’s example, it appears to the scientists *both* that it is a law of nature that Fs are Gs, and that X is an F. If things are as they appear, it follows straightforwardly that X is a G. Making some fairly innocent assumptions about reasons for belief, it seems to follow as straightforwardly that, if things are as they appear, the scientists have an objective reason to believe that X is a G. So, given (E), they have a subjective reason to believe that X is a G. Hence, it is rational for them to believe that X is a G. That seems the right result.

Of course, depending on your view about the laws of nature, it might not be metaphysically possible for things to be as they appear to the scientists to be. Perhaps in no metaphysically possible world is it the case that all Fs are Gs. But that’s not a problem for my account since, as discussed previously, it deals with epistemic possibilities.

So, if I’ve understood the Problem of Wouldn’t-Be Reasons, I don’t think (E) faces it.

So far so negative. I should stress that I appreciate Kurt’s efforts to bring my own account into contact with the one which he advances, and to offer responses on my behalf to some of the other issues.

(I hope to say something soon about Jack Woods’ latest comment. Unfortunately, I’ll be out the office for the next couple of days so there might be a delay before I post again.)

Thanks, Daniel. I wasn't worried about that case in particular -- though it is (fair enough) a case I used to illustrate the problem in the paper.

I was just worried about whether the appeal to a priori *knowability* rather than something non-factive would lead to counterexamples that similarly illustrate that the account imposes a necessary condition that is unnecessary. Can't we imagine cases in which a subject is in a position to form the a priori rational but false belief that if the facts of the situation are as they appear to be, those facts give an objective reason to X? Here the indicative conditional is not knowable a priori because it is false. Maybe you think there are no such cases. But if there are such cases, I think it will be intuitive that X-ing is rational in them.

There are various ways you could respond. Maybe you think this is a case where X-ing would merely be blameless. But then I'd wonder about how you'd respond to my other worries about that move.

Perhaps it is wrong to treat these cases as illustrating the *same* problem as the cases I employed when discussing the Problem of Wouldn't-Be Reasons. Nonetheless, I think there's a deeper problem that is similar: the account imposes a necessary condition that it shouldn't impose.

Hi Daniel,

I think your view and the one I defend in ‘Subjective Reasons’ (Ethical Theory and Moral Practice, 2012) are quite similar, and I wonder if they might be more similar than you think. My view is, essentially, that there is a subjective reason for one to X iff it is entailed by (the content of) one’s perspective on the facts that there is an objective reason for one to X (although I don’t use the term ‘perspective’), with a restriction to epistemically ‘accessible’ reasons.

The similarities: (1) We both reject counterfactual/subjunctive conditional analyses, and for similar reasons; (2) we both adopt accounts based on indicative conditionals; and (3) we both have a condition of epistemic accessibility.

The differences: (1) On the view I defend in the paper, one’s perspective includes only one’s credences and beliefs, although I now agree that including other sorts of attitudes (e.g. seemings, perceptions, memories) is probably warranted. (2) You have two restrictions that I don’t: (a) irrational beliefs don’t generate subjective reasons, and (b) beliefs about reasons don’t generate subjective reasons.

The core of our accounts, however, is where I wish to focus. In particular, I wonder how much daylight there is between my use of entailment plus epistemic accessibility and your use of a priori knowability. You mention in your paper (note 27) that your account has the advantage of having epistemic accessibility built in, while my restriction to accessible reasons appears ad hoc, but I must admit I find this claim hard to assess. Is the advantage simply one of theoretical simplicity/elegance? Might our accounts be extensionally equivalent, and if so, how much of an advantage would simplicity/elegance be (especially given that you don’t have reductive ambitions)?

Hi Eric,

Thanks for this really interesting post. As you say, I had taken our accounts to be very different. Indeed, I took your account to be one my main targets! In what follows, I’ll try to explain why.

Although your final proposal (in ‘Subjective Reasons’) is more nuanced than this, it’s along the following lines:

(1) There is a subjective reason for A to X iff A’s beliefs entail that there is an objective reason for A to X.

In the paper (pp. 244-245), you explain that we should understand this as equivalent to:

(2) There is a subjective reason for A to X iff, necessarily, were A’s beliefs true, there would be an objective reason for A to X.

To put it another way, an agent has a subjective reason to X just in case what she believes strictly implies that she has an objective reason to X.

Since you (p. 242) and I both assume a Lewis-style semantics for counterfactuals, I took your proposal to be equivalent to:

(3) There is a subjective reason for A to X iff in all the metaphysically possible worlds in which A’s beliefs are true there is an objective reason for A to X.

Fn12 of your paper seems to confirm this. And, as far as I can tell, there is no suggestion elsewhere that we might understand the possibilities involved as epistemic.

In light of the above, I took you to be advancing a counterfactual analysis of subjective reasons, or an analysis which is equivalent to a counterfactual analysis. For that reason, I took your account to be vulnerable to counterexamples like those involving Mary which Errol outlines above (that is, cases in which it is not metaphysically possible for the facts to be as they appear to the subject to be). By the same token, I took my account to differ from yours, as it is precisely designed to be an alternative to counterfactual analyses.

If you did not intend your analysis (1) to be understood as equivalent to (3), then perhaps it can be understood in a way that closes what I took to be the gap between our proposals. That, of course, would be a welcome result as far as I am concerned!

On your final point -
As you say above, in ‘Subjective Reasons’ you float the idea that there is an ‘accessibility constraint’ on subjective reasons (p. 252). Very roughly, the idea is that the agent must have ‘epistemic access’ to the proposition which provides the relevant reason. I think my concern was that this looks like something which gets added so as to avoid certain problematic cases but which lacked independent motivation. (This seemed especially true since I understood your account of the relationship between what a subject believes and what reasons she has to be one of metaphysical, not epistemic, determination.) In contrast, on my account, the constraint just falls out of the way in which I cash out the idea that one’s perspective determines that one has a reason to act, namely, in terms of a priori knowability. That’s the (alleged) advantage.

Once again, thanks for your comments. I’d be pleased to find out that our proposals are not in competition.

(Jack - I've not forgotten you!)

Daniel,

Thanks very much for the reply.

I didn’t take my account to be a counterfactual analysis; indeed, none of (1) - (3) strike me as counterfactuals. In my paper, I discuss subjunctive conditionals rather than counterfactuals (the latter category, I take it, is a subset of the former), and I argue that the usual accounts of subjective reasons that invoke subjunctive conditionals are problematic unless they invoke *necessary* subjunctive conditionals, which are equivalent to strict conditionals — the latter being the formulation I adopt. (Maybe strict conditionals are equivalent to counterfactuals (at least when the antecedent is contrary to fact) because they are equivalent to necessary subjunctive conditionals – I just didn’t take them that way.)

In any case, I think you’re right that Mary’s case is a problem for accounts (like mine) that invoke strict conditionals. (That Mary’s case in your paper was explicitly directed at counterfactual accounts led me, I think, to dismiss it as not being a problem for my own view.) I will have to consider more thoroughly whether a view like mine has the resources to deal with Mary’s case; I admit, it seems a challenge.

Hi Jack,

Thanks for pressing this issue.

As Errol stresses in his summary above, the problem case for me would be one in which it is the case *both* that it is rational for the agent to believe that p and for her to believe that q *and* that she is in a position to know a priori that those beliefs are inconsistent. For what it’s worth, I’m not yet persuaded that there are cases in which both conditions are met.

But rather than repeat or insist on this point, I’ll offer a different response. (For ease of presentation, I’ll assume that an agent’s perspective is determined (only) by what she believes. Nothing turns on this.)

When determining whether it is rational for an agent to perform some act, we can evaluate that act relative to *all* of her beliefs or relative to *some* relevant subset of her beliefs. The (so to speak) global evaluation is no doubt legitimate but, I think, we are typically more interested in the local evaluation.

Suppose, for example, that an agent decides to take an umbrella when leaving the house. When determining whether that is a rational (smart, sensible, etc.) thing for her to do, we might consider the action in light of her beliefs about, say, the weather but would probably not consider her beliefs about the upcoming general election. With our attention restricted in this way, we might decide that, given the agent’s meteorological beliefs, taking the umbrella is rational, irrespective of her inconsistent political opinions.

Though I don’t make much of this point in the paper, the account of subjective reasons is supposed to be ‘localised’ in this fashion. Whether an agent has a subjective reason to do something, hence, whether it is rational for her to do so, depends on her beliefs *about the situation*, that is, on those of her beliefs which are relevant to the question of whether to perform the relevant act.

To return to Vlad the mathematician – Suppose we grant that Vlad has rational but inconsistent beliefs, and that he is in a position to know this a priori. Still, those beliefs will presumably not be relevant to determining whether (e.g.) it is rational for Vlad to take an umbrella when leaving the house, hence, whether he has a subjective reason for doing so. Those beliefs do not bear on his practical situation.

Perhaps this speaks to the worry about ‘overspill’.

Thanks for the response. It's mildly controversial, but presumably familiar, that many, many cases of ordinary belief and plenty of cases of theoretical belief are plausibly cases where we know our beliefs are inconsistent, but the benefits of ironing them out are outweighed by the difficulty of doing so. Maxwell's equations and the Bohr model of the atom are a famous case of inconsistent theoretical belief, and Harman's Change in View is chock-full of cases of ordinary inconsistent belief where we meet all the conditions you stress. Plausibly. I won't fuss about this too much, since you can claim either that (a) we don't actually believe both, or (b) our beliefs are irrational. I won't argue for this here, but I think that tack massively implausible.

Responding to your other point is more interesting. All I want to ask here is how you think we bound off the beliefs. Surely it's bad enough if Vlad has subjective reason to come to believe that 1+1 = 3 or burn his current manuscript. I'm not sure how the umbrella case solves this worry completely...for any situation where you've brought in enough beliefs to get a sensible account of the agent's perspective on it, you'll generate subjective reasons which they intuitively do not have if they have rational, yet inconsistent beliefs about the situation. Which seems quite plausible. Or, anyways, to me and many others.

Thanks again for the reply! It's interesting and I think the bracketing-style response lessens if not eliminates the problem.

Thanks Jack.

I’m not sure it’s so implausible if one keeps in mind that there are other terms of evaluation we can apply in the cases you have in mind (e.g. blamelessness), and that we are dealing here with outright belief in a proposition, as opposed to, say, a degree of credence towards that proposition (or some other cognitive relation to it, such as reliance). But, as I doubt we’ll resolve that issue here, I’ll set it aside and point to a different response…

On the view I propose, an agent has a subjective reason to X iff it is *knowable* a priori that, if things are as they appear, she has an objective reason to X.

In dealing with cases like Vlad the mathematician, it might help to think more about this ability to know.

Compare: Mo Farah is able to run 10K. But he is not able to run 10K in less than 10 minutes, or in the absence of oxygen, etc. It is plausible that, when we say that a subject has a certain ability, there is some implicit/implied/implicated relativisation to certain circumstances. (This, I take it, is a familiar point from the literature.)

Likewise, a person might be able to know a priori that p (that is, she can know a priori that p). But she might not be able to know this, a priori or otherwise, in an instant, or without a period of uninterrupted reflection, etc. So, it is equally plausible that, when we say that a subject can or is able to know a priori that p, there is some implicit/implied/implicated relativisation to certain circumstances.

In view of this, one might formulate the view making this relativisation explicit in something like the following way: an agent has a subjective reason to X iff she can know a priori *in such-and-such circumstances* that, if things are as they appear, she has an objective reason to X.

Returning to Vlad the mathematician, if we grant that his inconsistent beliefs are rational, we might still deny that they give him a subjective reason to do anything and everything on the grounds that, though he can know a priori that his beliefs are inconsistent in some circumstance or other, he cannot know this in the relevant circumstances (or under the relevant conditions).

Needless to say, I need to say something positive about what the circumstances are relative to which the relevant proposition is knowable a priori. It’s going to be more than ‘after a moment of casual reflection’ and less than ‘after a month of uninterrupted contemplation’. (I take it that, if Vlad could see with a cursory glance that his beliefs are inconsistent, there would be very little inclination to insist that his beliefs are rational.)

I’m afraid I don’t have much more to say about this issue right now. I need to work out the details here (a month of uninterrupted reflection would be nice!).

While awaiting the details, it might still be worth stressing a couple of points:
First, to appeal to the relativity of abilities is not an ad hoc response to Vlad-style cases. There is independent reason to think that talk of abilities (competences, capacities, powers, and the like) is relativised in this way. So, these are details which need in any case to be worked out.
Second, there might be some degree of vagueness or context-sensitivity here. But that would be okay, as I take it that judgements of rationality are correspondingly vague or context-sensitive.

In summary, the thought is as follows. *If* we grant that subjects can meet Errol’s conditions (i) and (ii), getting clear about the relativisation involved, along with the bracketing-style response, might help with ‘overspill’ or ‘explosions’.

I agree that it isn't ad hoc. It is, in fact, the natural path to take in response to Errol's style of case... and relativizing not just the information but also the ability modal to the circumstance helps as well.

I just wonder if it's enough to avoid the problem cases. I was thinking that the really troublesome cases are those where we know in advance that a certain set of our beliefs is inconsistent – which I think is frequently the case. In a situation where those beliefs become relevant and we antecedently know why, it wouldn't take much to know a priori that they are... unless you want to subtract such knowledge from what we bring into those circumstances. But that seems strange, and it would require not importing lots of other information in non-troublesome cases. Anyways, I'll await the details.

Note also that in order to get Errol-style cases up and running, we don't need to posit an explicit inconsistency we can discover. It's enough to know a priori that there will be an inconsistency somewhere down the pike – it doesn't matter which. An adapted case from Tom Kelly makes the point nicely. If we can explicitly outright believe that at least one of our relevant-in-a-circumstance outright-believed beliefs is false, then we know things cannot be as they appear. Effectively, this is to point out that we can strengthen Errol's point by weakening his (ii) from "know a priori that this inconsistency exists" to "know a priori that an inconsistency exists".

I look forward to hearing more!

Hi Jack,

Thanks (again!).

I’m pleased to hear that you think that the relativisation route is the one to take. And your second point is well taken.

I’m afraid I don’t have much (any?) more to say in response to the concern you are pressing. Suppose that I know in advance that my political opinions are inconsistent (though I don’t know where the inconsistency lies). My ‘intuition’ is that, in that case, it is not rational for me to fully, flat-out believe the relevant propositions, though it might be rational to have a high degree of credence towards each of them, or to rely on them for the purposes of voting, or to think that each proposition is probable, etc. The ‘intuition’ does not go away when we make the case one in which I do not know this but am in a position to know it a priori (in such-and-such circumstances) – though, depending on how the details get filled in, I might be blameless in continuing to outright believe each proposition.

Of course, these are (so-called) intuitions, not arguments. How to make progress? Well, returning to a point which came up earlier in the discussion, what would help is to have a worked-out account of rational belief to accompany the account of rational action. That is work-in-progress.

I agree with all of this, modulo the difference in intuitions. Quite a bit is going to turn on being able to give an account of rational belief which is sufficient to generate enough subjective reasons while simultaneously screening off cases like Vlad and the above ones I mentioned. But, as you say, that's for future work. Thanks for the fun exchange.

