
January 31, 2014



I think the real worry that motivates Cohen, which I'm not sure you fully answer (then again, blog post...) shows up in an ugly dilemma: either a) whenever we don't do what morality, in a full-agency world, demands, we fail to do so just because we don't have the agentic capacity to do so, or b) sometimes when we don't do what full-agency morality demands, we have the agentic capacity to do so, but screw it up anyway.

In world a), we've just abolished moral criticism, and perhaps reduced all of normative philosophy to soulcraft to the end of building stronger agency. In world b), we have an even harder problem: how do we tell the lack of agency cases from the lack of (whatever else it is that is necessary to act morally) cases? We'd need to do so in order to properly carry out practices of praise and blame, as well as coercion and punishment, but it's hard to imagine even what kinds of evidence would count. (Assuming, of course, that the set of insufficient agency actors is broader than those with diagnosable psychological disorders.)

I like what Paul says. Two more things.

1. Doesn't Cohen reject the claim that X being required by justice implies we are psychologically capable of bringing about X? One of his concerns is to make judgments about the goodness or badness of human nature, from the standpoint of justice. I guess this is a concern about whether you've granted him all his premises in the post.

2. It seems at least worth considering that the limits on agents' capacity for altruism are socially responsive. Maybe the wizard imposes some limits on us, but he does so in ways that manifest differently depending on what our social circumstances look like. So we might still have an argument to change those circumstances so as to get more people closer to being "Singer types," even if wizard-imposed limits on our capacities remain.

Hm, I don’t find any of your examples compelling, frankly – the very first one, about the wizard implanting a particularly strong revulsion to bathroom-cleaning, would have to be spelled out more, and for all the others I’m just inclined to say that the brainwashing victim has no excuse at all.

Also, maybe a technicality:
The fact that someone can’t bring himself to φ does not imply that he can’t φ. (The converse does seem to hold, though.)

In any case, "ought implies can" is just a dogma. Don’t let it drive the rest of your view.

Jamie is right to point to the fact that being unable to bring oneself to do X does not imply being unable to do X. Harry Frankfurt's cases of "volitional necessity", in fact, illustrate how such phenomena might enhance one's autonomy, as when Luther said, "Here I stand, I can do no other." The type of necessity at work is neither causal nor logical, but volitional, and it's a function of some cares the agent has involuntarily come to have. So were the wizard or evolution to create us in such ways that we are volitionally necessitated to refrain from doing things that would betray our cares -- i.e., if evolution designed us just the way we are -- then I can't see how we wouldn't still be the appropriate subjects of praise or blame, say.

What distinguishes these cases from compulsion cases is that the source of our "inability" in volitional necessity cases is ourselves, i.e., the attitudes flow directly from things that matter to us, whereas (except in willing addiction cases) that's not true for compulsives.

A literature point, in case you're interested: Al Mele and Derk Pereboom construct detailed cases like the ones you're after in various of their books/articles.

If the claim boils down to "evolution makes us selfish, so we are permitted to be selfish," you've committed the Naturalistic Fallacy.

In addition, the entire argument from unwillingness can be undone by arguing that norms in favor of agency demand taking responsibility for one's will by shaping it in favor of morally good acts of willing. If there are to be morally good acts, there must be moral agents who strive to be morally good.

While it seems true that once one becomes sensitive to evolutionary and psychological factors the line between 'can't' and 'won't' begins to blur, and the postulation of 'oughts' consequently becomes problematic, this seems actually to be the existential mess of our ethical reality, where the muddle of experience is in tension with the regulative ideals of goodness, justice, and excellence. The psychological truths you describe, where some people, some developmental stages, or sometimes simply moments of our lives possess fewer degrees of freedom than others, do not in my view pose a problem for morality but articulate one of its basic elements, i.e. the demand to strive for more self-consciousness, sensitivity, and autonomy. This is our challenge in raising children, our challenge in having a fair legal system, and most importantly in working out the dialectics of our own lives.

A lot hangs on the meaning of 'can't' and 'unwilling'. I do see a danger in collapsing the difference. Difficulties and obstacles are part of moral value. Once we blur the line into a 'can't', moral struggle disappears. Life may be tragic, it may be a struggle, but if keeping a promise is an 'ought' it remains so even if one's deepest will resists it. Unwilling isn't the same as can't.

I take it the logic of your argument is supposed to be something like:

1) we have the intuition in your 3rd case (where the wizard seeds the DNA of primitive humans) that someone's implanted-by-wizards unwillingness to phi defeats the claim that she has a moral duty to phi.

2) there's no relevant difference between this case and the real world case.

3) so, in the real world, someone's unwillingness to phi defeats the claim that she has a moral duty to phi.

Why not, as many have done in response to Pereboom's 4-case manipulation argument, just run your argument backwards? In the real world, we have the intuition that unwillingness to phi *doesn't* defeat the claim that one has a moral duty to phi, and, since there's no relevant difference between the real world and the wizard world (your third case), implanted-by-wizards unwillingness to phi doesn't defeat the claim that one morally should phi.

I don't think Jamie's point is a mere "technicality": the fact that someone can’t bring himself to φ does not imply that he can’t φ. (I press that point and its relevance for political philosophy here.) In light of that, we'd need some argument for letting "can't bring himself to φ" block "he ought to φ," since "ought implies can" does not get engaged. Much of the literature on these questions is about freedom, responsibility, or blame, but the immediate question is whether he can, and so ought to, φ.

I lean toward the view that he can do it so long as there is an opportunity and were he to try to do it (as opposed to trying to bring himself to do it) he would (tend to) succeed. (This is a sketch of the general idea, employed by a number of authors. Refinements might well be needed to cover certain deviant cases, etc.) Motivations (installed or otherwise) not to try something are not inabilities to do that thing, since they put nothing in the way of the agent's succeeding should he try. (Some might want to say it is an inability to try. But there are dangers in treating trying as an action in its own right in that way. And while some say this scenario counts against freedom or responsibility, I take no stand on that here.) So there may yet be lots of requirements on us that we cannot bring ourselves to meet, though we could meet them. I don't know whether this favors socialism, but it would count against a very common way of arguing against relatively idealistic political theories, as if "ought" implied "can bring himself to." It makes that look indefensibly conservative.

But note that Cohen also holds that this is true: If you are committed to "you ought to do A", then you are also committed to "You ought to do A if it's possible to do A". I reckon there is one interpretation according to which this fact-insensitivity thesis is true, but then it can be shown to rest on an unappealing form of ethical intuitionism. Or at least that's what I argue here:

Sorry for the condensed comment; I'm in a frightful rush now, but will try and come back later.

Hi everyone,

Thanks for all the comments. As I suspected, there's a vast literature on this topic I haven't read.

I'll post more later, but start with Paul, since he was the first to respond:

I don't want to abolish moral responsibility altogether. But instead of treating moral responsibility like an on-off switch, treat it like a continuum. There are things I cannot will myself to do--I don't have the psychological capacity to do so, period, and never will, no matter what I do. There are things that I cannot now will myself to do, but I could will myself to undergo moral training such that I acquire the will to do so. And then there are things I can will myself to do right now.

Take, say, the Uruk Hai from LoTR. Why are they so cruel? Let's say (I'm not sure how accurate this is) that that's just how they were made. They were designed to be evil. Sure, there's a sense in which a Uruk Hai could be altruistic, if only its mind were different from how it in fact is, but that's the same as saying a Uruk Hai could lift 10,000 pounds, if only its body were different from how it in fact is. One might say, like the view Estlund leans toward, that the Uruk Hai could do it if there were an opportunity and if it were to try to do it. But what I am asking here is whether there is a class of things, which might vary from person to person, that they are psychologically unable to try to do.

Imagine a cartoony picture of human action in which, right before you phi, the motive to phi has to get into your motive box and stay there for 1/10th of a second. Otherwise, you don't phi. Now, picture the motive box as broken. The motive box can hold motives to psi and gamma, but it has big phi-shaped holes at the bottom, such that the motive to phi just slips out. And suppose the agent can't fix his motive box in any interesting sense.

Maybe brain damage--a stake through the head--might break the "motive box", so to speak. Maybe massive abuse suffered as a child might break it. Or maybe being cursed with bad DNA might break it. There's a worry here that once we start down this route, we can't blame anyone for anything. But what if it's that we can praise and blame people for many things, but not all things? (The practice of praise and blame becomes much harder, of course, because we often won't know if we're doing it properly.)

I may be missing the main thrust of the argument. Phenomenologically speaking, it seems perhaps a truism that all freedom is conditioned freedom, all choice conditioned and limited more or less by history, finite knowledge, perspective, the body, situations, and so on, and therefore all moral judgment of human action, in striving for accuracy and fairness, needs to take such conditions into account. I am not sure what there is to argue about in this regard. The problems and complexity of the practice of praise and blame shouldn't be a surprise, given that it seems inevitable. The whole mess, however, is predicated on two things: on the one hand, that there is a significant manner in which it makes sense to speak of things being good and bad in themselves [recall the history of debates on natural evil]; on the other, the assertion that moral evils depend on the freedom or possibility to do something [the can]. The former [like your example of the Uruk Hai, or earthquakes, storms, diseases, tigers, or whatever] are only morally evil to the extent that we connect them with a god, devil, or human who freely caused them. The latter holds that we are responsible and culpable even for the feelings, motives, and acts that may condition our inability to carry out what is moral. So, to live morally is to demand this of ourselves, even while to judge morality [in others] is to take into consideration just these conditions as ameliorating circumstances. To confuse these perspectives [first and second] may lead one to throw out the proverbial baby with the bathwater.

J, you say, "what I [am] asking here is whether there is a class of things, which might vary from person to person, that they are psychologically unable to try to do." Are you prepared to treat trying as an action, so that questions can be asked about abilities to perform it? There's a regress there, since then the trying must be tried, and so on. If it's not like that, but rather just necessary, for some reason, that they will not try (which is different) that's just the kind of thing determinism does generally. I think my good upbringing necessitates that I won't steal your car (so long as conditions aren't too dire). That hardly bears on whether I am able to. It might be easy to steal your car. I just (necessarily, given the past) don't want to.


Yes, that's very worrisome--definitely want to avoid a regress here.
