
February 02, 2012



Before considering the special case of an 'ethical' autocorrect, why not consider more general cases?

Would you use it for driving, where it autocorrects possible accident choices? (This already exists to a substantial degree.)

Would you use it to enforce a diet which you are otherwise too akratic to follow through on? Or to exercise daily? (Both of these are theoretically doable as a kind of wireheading/direct stimulation of the pleasure center.)

Would you use a far more limited commitment device, like StickK, where only money is at stake?

If you wouldn't say yes to these, there's no point in discussing an ethical auto-correct.

Who gets to decide what is ethical?

I got so sick of the auto-correct on my iPhone "correcting" all my texts and emails to be x-rated, I switched to an Android phone. It's not much better. I sure hope ethical auto-correct doesn't have the same kinds of problems. . . . I'm not so sure I can put aside this worry and play along.

This is perhaps getting a little too tech-oriented, but an "ethical auto-correct" would suffer from conflicting "entries" for numerous situations. Various schools of thought would insist on different courses of action regarding the trolley dilemma and, depending on the people involved, would result in different outcomes.

This could be resolved by adopting another real-world auto-correct feature: "Languages". If you choose the "Utilitarian" school of thought (like choosing English, or Spanish), all actions will be judged by Utilitarian ethics and adjusted accordingly. If something you do is totally unintelligible, a little red line would underscore that entire period of your life, I suppose.

But I digress. It could make coherent sense, but I doubt many people would take advantage of it. The reality as I've observed it suggests that people prefer to live by numerous ethical schools, switching among them depending on which is convenient. Ultimately, convenience (and autonomy) are the most important features to rational adults, and an auto-correct feature would deny users both.

Personally, my ethics upbringing has been inconsistent with a single ethical school of thought. It takes from numerous schools of thought depending on the situation, attempting to account for pragmatic issues like logistics and whether the parties involved know or care. As a result, a consistent ethics auto-correct feature would be unappealing to me as well. It would force me to live a less convenient (and less autonomous) life, both of which matter to me a great deal.

Adding ethical principles to machines has been explored in the sub-field of Machine Ethics. There's an interesting collection on this topic at the link below.

A more defensible notion is the idea of an ethical adviser. It would be like spell check without auto-correct turned on. With auto-correct turned on the violation of human autonomy seems to undercut part of what is required to be an ethical agent: the possibility of choosing an unethical action yet deciding to choose differently.

Eric, I was about to tell you about Robert Howell's "Google Morals" paper, but then I remembered that you were present when he gave the paper (so I know no more about it than you do). Would you like to comment on what you think is different about Ethical Autocorrect?

I meant to add: Google Morals is like Christopher's ethical adviser. It's an iPhone app, or something like that.

Jamie: What's different is that Google Morals gives you the correct answer about what to do. What you actually do after that is still up to you. Ethical AutoCorrect doesn't give you the correct answer about what to do, but alters what you do (if that's coherent), in much the same way that AutoCorrect alters what you write: on the fly. After you do it, of course, you can see how your decision was AutoCorrected.

You do choose whether to have Ethical AutoCorrect on or off, and you can change whether it is. So, we are to imagine that Ethical AutoCorrect does not AutoCorrect your decision about whether to continue using it!

But I suspect that some people might think that using Google Morals is ok, because with each decision you choose whether to accept its specific advice, while not thinking that using Ethical AutoCorrect is ok, because it changes what you are doing without your decision-to-decision endorsement of the details of its wisdom.


I take it that, contra somewhatboxes, we're supposed to assume that the ethical autocorrect (EAC) is "pre-programmed" with the correct moral theory. With that in mind...

The EAC could be either of two things:

EAC(1): A mechanism that turns immoral actions into moral ones.

EAC(2): A mechanism that corrects for the disvaluable effects of one's actions.

EAC(2) seems perfectly coherent. But I take it that EAC(1) is what we're interested in. I'm in the camp that thinks that an action can only be moral if it is arrived at through deliberation of the right kind. So I think EAC(1) will turn out to be incoherent.

I do think that if EAC(2) existed it would be obligatory to use it.

"After you do it, of course, you can see how your decision was AutoCorrected." Only if you catch how autocorrect changed your action. This is a common problem with autocorrect: what it does often goes unnoticed because it changes things on the fly. Assuming, as you do, that autocorrect is perfect, you might have a (perfect?) moral obligation not to override what it has perfectly done. This counts in favor of not trying to figure out which actions autocorrect changed from immoral to moral. This means the ultimate choice where you can exercise moral agency is the choice whether or not to turn autocorrect on. After that, you may have an obligation not to monkey with its changes, even though it is possible for you to do so.


That's an important distinction. I was thinking of something more like EAC(1). I want to understand better why you (and others) think that using EAC(1) would not yield action arrived at through deliberation of the right kind.

Because I aim to do only what's morally right, I turn on EAC. I am unsure whether to throw the trolley switch. I do my best, but I make the incorrect decision. EAC fixes that for me. So my action turns out right, and it does so because I aim to do only what's right. Isn't this good enough?

On the first question, I think the coherence of the idea might be challenged if we think that in some cases the right action must involve autonomy in a way that’s inconsistent with the use of Autocorrect. For instance, I think that I ought to issue sincere apologies for past wrongs without being prompted to do so or assisted in doing so. Autocorrect can’t make me do that. So, there’s at least one case where Autocorrect can’t make me do the right thing. (But my ethical view here is controversial, I admit. But maybe something along these lines might pose a problem. I need to think about this more.)

The second question – whether you should use it – is easier. Just turn it on. If you leave it on, the answer is yes. If you turn it off, the answer is no.

Hi Eric,

an interesting question. Just a few incoherent thoughts. It seems to me that knowing that an Autocorrect Device is the Ethical Autocorrect Device already requires knowing what the right ethical views are. Imagine that you were offered a range of devices, each of which corrected you to do different actions. Would there be a point in picking any one of them? Well, there would be, if you knew that the specific device made you do *the right actions*. But picking that device out of the many requires already knowing which of the actions the devices make you do are the right ones. (This feels like the problem of picking the ethical exemplar in virtue theories, or personal gurus in the self-help discussions.)

So, would there be a point in using the Ethical Autocorrect if you already knew that that specific device makes you do the actions you knew to be right?

Assuming internalism, you might think that you are already moved to do the actions that you think are right. But you might also think that you'll be weak-willed on some future occasions, and that the Ethical Autocorrect is a good way to make you do the right actions even when you really don't want to do them but would rather do something else. So it might be of some use to safeguard oneself in those cases.

But here it seems that the Kantian considerations kick in. The actions you will do because of the machine when you are weak-willed do not seem to have much moral worth (or meaning, as Scanlon would say). True, you are doing the right actions, and that's somewhat desirable generally speaking, especially from the patients' perspective, but in these cases you won't have the right kind of free will for morally admirable action.

What if the mechanism by which ethical autocorrect (EA) worked was to make sure that your deliberations lead you to the right action? Let's say you are trying to decide whether you should apologize for a lie you told to a friend. In your deliberation, your self-interested reasons are being outweighed by the moral ones, and you are leaning toward avoiding the shame and anguish of apologizing. Then EA tweaks a few neurons and, automagically, your deliberations lead you in the right direction. Because of EA, you now see that the moral reasons in favor of apologizing outweigh the self-interested reasons. You do the right thing, for the right reasons, and you come to it by deliberating about where the strength of the relevant reasons lies. You just needed a little help in your deliberation. Interestingly, I don't see why EA's help in this case is substantially different from a standard case of advice. If you were to ask a friend about what you should do about the lie, she might emphasize the importance of acting morally rather than self-interestedly. In both cases, you just needed a little nudge in the right direction.

Sorry...the third sentence in my comment should read: "In your deliberation, your self-interested reasons are *outweighing* the moral ones, and you are leaning toward avoiding the shame and anguish of apologizing."

Auto-correct is supposed to correct typos. Typos are typing slips. Some slips are moral.

If I could use a machine that would correct my moral slips, I would definitely use it. Deciding not to use it, in my opinion, could even be an instance of recklessness.

In fact, many of us might already be using EACs. Thanks to my calendar, certain things I ought to remember do not slip my memory. Thanks to the tip-calculator in my cell phone, I make sure that I do not tip less than I should due to a mathematical slip.

The examples might be challenged by arguing that forgetting or miscalculating are not examples of blameworthy violations of one's duties. Other examples, however, can be found.

Moreover, the notion of an EAC is not incoherent. My word processor corrects many of my typos--sometimes without me noticing it. But my typing is autonomous. Notice that, thanks to it, I wind up typing what I wanted to type in the first place. Likewise, an EAC would be a crutch for my limitations; not a limit on my autonomy.

In general, auto-correct functions are meant to ensure that people behave in accordance with their preferences and knowledge. They are, in other words, a way of bridging the gap between competence and performance. So anyone who recognizes that one often fails to do what one prefers (and not through any lack of goodwill or through ignorance) should be happy that EACs are coherent and that they (possibly) exist.

Note: It would be different if the machine Eric has in mind were one that "decided" for you what your moral values or preferences are. Or one that were to change the "objective" moral scale of the world. If this is the machine Eric has in mind, however, the auto-correct function might not be the right analogy.


First, I assumed, contra Philip's suggestion, that the EAC involves post-hoc correction and does not affect the actual deliberative process. If his version is what you had in mind, then I agree that it's coherent but I'm not sure what to think about using it. For now, I'll assume my original understanding of the mechanism was correct...

On my view, acting morally is about taking the action that is the output of proper deliberation. If I have not deliberated as well as I needed to in order to end with the optimal action w/r/t what's valuable, but could have, then I acted immorally. Nothing done post-intention that affects my action or its consequences can change this.

Now, you say:

"Because I aim to do only what's morally right, I turn on EAC. I am unsure whether to throw the trolley switch. I do my best, but I make the incorrect decision. EAC fixes that for me. So my action turns out right, and it does so because I aim to do only what's right. Isn't this good enough?"

On my view, if I really did my best, then I acted morally, even if my action was not optimal w/r/t what's valuable. So turning on the EAC did not make my action moral, it just made it better w/r/t what's valuable.

Now, you might still think the following: I had the option to turn on the EAC, which would adjust for my error w/r/t what's valuable. Given that I know I'm fallible, it would be immoral to leave the EAC off. So turning it on makes my action moral. So the EAC is an EAC(1) as well as an EAC(2).

The problem with this line of reasoning (which I don't mean to attribute to you, but was initially tempted by and so thought I'd head off at the pass) is that in this case it's the turning on of the EAC that makes my action right, not the autocorrection itself. So the EAC is still merely an EAC(2).

I was going to say more or less what Jussi said. But now it seems to me that John Brunero has demonstrated that the solution is so simple as to threaten the interest of the question!

Well, we might suppose that Ethical AutoCorrect has one blindspot: it cannot correct any errors regarding turning Ethical AutoCorrect on or off. Or we might just ask, if I turn on Ethical AutoCorrect, will it instantly cause me to turn it back off? So, too bad, John's beautiful pseudosolution fails.

This is why I said above that we are to imagine that Ethical AutoCorrect does not AutoCorrect your decision about whether to continue using it. Otherwise, Brunero's second point would kick in. (He's teaching a seminar right now, so he can't respond immediately. Ha!)

Another possible way to respond to Eric's challenge to David would be to understand "deliberation of the right kind" as being guided, not by a de dicto concern for rightness, but rather a de re concern for the actual right-making features. On this way of looking at things, trying your best to act rightly is no guarantee of having done so, if you're sufficiently misguided about what rightness consists in.

Hm, now I'm unhappy about the fact that I just claimed I was going to say more or less what Jussi said and he apologized for having incoherent thoughts. So I would like to be a little more careful.

Eric writes,

So my action turns out right, and it does so because I aim to do only what's right. Isn't this good enough?

I think it might be good enough, in many cases, but it's not good.
Let's say that when the action "turns out right", this is the 'external' sense of 'right'. (Kind of Kantian.) That is, intentions and motives aside, it is in outward form the same as the right action.
So my view is that it is not to a person's credit if the person does the externally right thing but not for the proper reasons. So there are two problems with the user of Ethical AutoCorrect: first, even though she might be doing the externally right thing because she wants to do the right thing, she does not seem to be doing it for the proper reasons. And second, "because it is right" is not, in any case, the proper reason. (The proper reason involves the right-making property, not the rightness.)

For example, suppose that Sobel becomes convinced that ethical egoism is true. But he knows that the rest of us disagree; we believe in the wishy-washy commonsense morality. He always does what is in fact the (externally) right thing, motivated by his desire to appear to us to be on our team (which he thinks is going to help him get ahead in life). So he does the externally right thing, and he does it because he is aiming to do what is right, and he is even motivated by the thought, "This is the right thing to do." But his acts are badly motivated, I say. The Ethical AutoCorrect-guided person seems to suffer from a similar defect.

Yeah, what Richard said.

What Richard says sounds right to me. But even if we suppose it is, I still might wonder whether a) the whole idea is coherent (it sounds like Santiago and gwern think it is), and b) whether one should switch on Ethical AutoCorrect. That is, I acknowledge it's better to act rightly without using Ethical AutoCorrect than to act using it. But those might not be the relevant options. I suspect that the worse one's own unaided ethical judgment, the more likely it is that one should use Ethical AutoCorrect (if the very idea is coherent).

When it comes down to choosing whether to turn on EA, I have a hard time being moved by concerns about moral worth. If I am wrong in thinking that it is permissible to buy a BMW rather than donating that money to famine relief, then I think I want to be corrected. Sure, my right action in this case might not be morally worthy, but I didn't wrongly fail to save many lives. Switch perspectives and think about this from the standpoint of all those who you act wrongly against. How hollow would it sound if you said to them, "Well, I didn't turn on EA because it is important to me that my actions have moral worth, and if I had done the right thing in this case, which I didn't, it would have been for the right reasons." You are basically telling them that it's more important to you that your actions have moral worth in a counterfactual world than it is that you avoid acting wrongly in the actual world.

Eric, have you thought about a version of EA that functions 'within deliberation' as opposed to after a decision has been made? Pereboom's latest incarnations of the manipulation argument involve just this sort of thing. Of course, in his example, the manipulators aren't correcting the deliberation but are ensuring that it's incorrect.

This is probably not what you are trying to get at, but has anyone suggested the obvious "Susan Wolf response"?

You should not turn it on because being ethical might be very demanding and rationally optional. Or rationally optional and costly for yourself and those you care about. (Or some variant in this vein)

Similar to Aaron's point: my iPhone regularly corrects me when I spell things (intentionally) in a novel or silly way. If EAC could not be switched off, any novel self-directed attempts at ethical judgments would be stopped by the program.

EAC would also seem to eliminate humor. If I attempt to make an off-color comment or joke, EAC would stop my attempt at humor. I'm guessing that many people would welcome that, but how funny would George Carlin have been if anytime his jokes were a little rude, they were self-corrected?

On the other hand, one interesting thing EAC could do would be eliminate implicit bias, or at least ethical actions caused by implicit bias.

I haven't read the whole thing, but there's some nice discussion about moral decision making in AI in Wendell Wallach and Colin Allen's book Moral Machines.

Thanks, all, for your ideas so far.

My primary worry is whether the very idea of an Ethical AutoCorrect is coherent. If I want to type the word 'the', and I type T, E, then H, my computer displays the word 'the'. Have I typed the word 'the'? I think I have. Further, I think I have typed the word 'the' intentionally.

But if I want to do what's right, and so I decide to swim to save my drowning wife, but swimming elsewhere to save two drowning children instead would really be right, I don't think that EAC can make me save two drowning children intentionally. Perhaps it can make my body save the two drowning children, but it cannot make me do so intentionally. (And if I don't save them intentionally, my saving them is not right.)

I would like to be able to explain why using AutoCorrect does not make nonsense of the claim that I type the word 'the' intentionally, but using Ethical AutoCorrect makes nonsense of the claim that I saved the drowning children intentionally, and of the claim that I acted rightly.
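(For what it's worth, the typing half of the analogy is mechanically simple. Below is a purely illustrative sketch, not any real phone's algorithm: the corrector checks each typed word against a small hypothetical dictionary and silently swaps adjacent letters until it finds a match, which is exactly how typing T, E, H can still count as typing 'the'.)

```python
# Illustrative sketch of a transposition-fixing autocorrect.
# The dictionary and the swap-only strategy are simplifying assumptions,
# not a description of any actual product's implementation.

DICTIONARY = {"the", "right", "wrong", "save", "children"}

def autocorrect(word: str) -> str:
    """Return `word` if it is known; otherwise try swapping each
    adjacent pair of letters and return the first dictionary match.
    The user's intention ('the') survives the slip ('teh')."""
    if word in DICTIONARY:
        return word
    for i in range(len(word) - 1):
        swapped = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        if swapped in DICTIONARY:
            return swapped
    return word  # no correction found; leave the slip alone

print(autocorrect("teh"))  # the slip is corrected on the fly -> 'the'
print(autocorrect("the"))  # already correct: left untouched
```

The point of the sketch is that the correction operates downstream of a fully determinate intention: the typist intended each letter of 'the', and the machine merely repairs the execution. The worry in the drowning case is that there is no analogously fine-grained intention for the device to repair.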

Hi Eric,

That makes sense. Some wild speculative thoughts...

Wouldn't the same worries you have about ethical autocorrect apply in more fully analogous typing cases?

In the 'the' case the person has an understanding of what needs to be typed to produce 'the' - 't', 'h', and 'e' - and arguably the person has an intention to type each individual letter. So the autocorrect only brings about things she specifically intends to bring about (so to speak).

In the ethics case, it brings about things the person does not specifically intend (this is the mushy bit, I know). This suggests that an analogous typing case would be one in which I have a very indeterminate intention or a general intention but "defective" specific ones. Maybe I intend to write a great poem, but since I suck at specifying the details of what said poem would be, I am about to write a crappy poem and to type each specific part of it intentionally. So Poetry autocorrect steps in and makes a great poem appear on the screen as I type, or takes over my fingers and makes them hit keys I don't intend to type. In this case, I want to say I did not intentionally type a great poem (like in the ethics case).

This suggests a (rather mushy) explanation of why autocorrect does not make nonsense of the claim about intentional doing in the 'the' case -- namely, that one's intentions are fine-grained and directed at typing each specific letter in that case, in a way that they are not in the ethical and poetry autocorrect cases.

Hope this is not redundant given what others have said.

Hi Eric -

I think that your device can perhaps be separated into two interestingly separate devices:

(1) An ethical oracle chip, which gives you access to the correct answer to any normative question (perhaps this is the idea of 'google morals'? I have not seen that paper)

(2) A continence device, which overrides any weak-willed action that you might be about to perform.

I think each of these devices raises fascinating questions, but they are separable. (I did an informal presentation for undergraduates about the continence device once, so have thought about it more).

As has already been noted, on some (normative ethical) views, the oracle might be self-undermining ("Performing the right act requires that you don't listen to me!", it might say). On other (metaethical) views, it might make no sense (e.g. views which deny there are any ethical facts for the oracle to state).

The continence device strikes me as interesting because

(a) on some views, inserting it might undermine your status as a normative agent: e.g. Korsgaard's claim that for something to be normative for you, you need to be able to fail to follow it. No longer true for our ought-judgments, once we have the device.
(b) reflection on the desirable features of the continence device /might/ show that we have reason not to care about normativity in the sense Korsgaard is talking about.
(c) in the absence of the oracle device, you might worry that the continence device was dangerous because it might prevent you from exercising virtuous weakness a la Huck Finn.
(d) reflection on the continence device /might/ lead some of us to realize that we treasure the delicious ability to occasionally shrug off our better normative judgment, something that can be hard for some normative theories to cope with, I think.

It seems to me that this is essentially what our conscience is, or is supposed to be: a reminder of what is ethical, a check on our temptations to be unethical. Except that it is 1) not guaranteed to correspond to actual morality in any given individual, and 2) not always heeded. In general, if 1 and 2 were fixed, it seems to me that this would be a good thing, and is not at all incoherent. Though Tristam's comment from Korsgaard, that normativity must be something we could fail to follow, is interesting; it seems in tension with the general Kantian (and Korsgaardian) idea that acting ethically makes you *more* of a person, not less of one (say, on the grounds that your freedom is restricted). I've always been suspicious of the original Kantian idea of a "holy will" which always acts ethically and cannot do otherwise.

At a first pass at reconciling these, I'm tempted to say that, given our instantiation in the physical world, and our supervenience upon fortuitous combinations of particles, both we and the "conscience/ethics-checker" mechanism still *could* fail. We need to be aware of the fact that, at the very least, cosmic rays could burst into our brains or checkers and send the mechanism off its rails, so that we do something unethical. To be a fully ethical being is not merely to have the percentage of one's ethical behavior reach 100% (which could occur by chance), but to be prepared to resist perturbations of one's ethical tendencies by "impulsive" causes, whether internal or external, if and when they arise. Of course, having such a back-up or standing disposition can simply be construed as part of the means by which we increase the percentage of ethical behavior, and can be built into the machine. But we must also endorse the machine's being built /in just this way/, rather than merely accept the output of the machine blindly, or because we cannot help doing so. Of course, if the machine is part of us, that helps--or if it can become part of us, in a sense, through our understanding and endorsement of its workings, as part of our Korsgaardian "practical identity."

I think this is in at least partial agreement with Richard's point that it's not enough to passively accept whatever the machine forces you to do. If you wholeheartedly endorse it via an understanding of why it dictates as it does, knowing that and why its requirements are right, and happily acceding to its correction of your behavior--knowing your moral frailties without it--then we can properly say that you have intentionally done what the machine makes you do, in a sense in which you don't if you resist (with futility) its corrections.

I don't think I've fully reconciled the tension here, but it's an interesting challenge to think about this.

If I understand the example properly, concerns about moral worth don't seem to give you any reason to eschew EAC. At least not if you assume that actions that are in conformity with duty always have at least as much moral worth as alternative actions that aren't. EAC doesn't act in any case in which you would have done the right thing without it, so your actions in those cases have no less moral worth than they would have anyway. It only makes you act in conformity with duty in cases in which you wouldn't have otherwise, and so any action that you perform because of it will always have at least as much moral worth as what you would have done without it.

