Many of you will know that John Stuart Mill advocates a scheme whereby college graduates, and the more educated more generally, would get more votes. Like some universities, he even accepts work experience in lieu of formal education:
If every ordinary unskilled labourer had one vote, a skilled labourer, whose occupation requires an exercised mind and a knowledge of some of the laws of external nature, ought to have two. A foreman, or superintendent of labour, whose occupation requires something more of general culture, and some moral as well as intellectual qualities, should perhaps have three. A farmer, manufacturer, or trader, who requires a still larger range of ideas and knowledge, and the power of guiding and attending to a great number of various operations at once, should have three or four. A member of any profession requiring a long, accurate, and systematic mental cultivation—a lawyer, a physician or surgeon, a clergyman of any denomination, a literary man, an artist, a public functionary (or, at all events, a member of every intellectual profession at the threshold of which there is a satisfactory examination test) ought to have five or six. A graduate of any university, or a person freely elected a member of any learned society, is entitled to at least as many ("Thoughts on Parliamentary Reform").
Virtually everyone today agrees that this is a terrible idea. But why?
Warren Quinn’s puzzle of the self-torturer is supposed to show that cyclic preferences can be rational, and that, in cases where they are, rationality can require resoluteness, so that the agent does not end up with an alternative that is worse than the one with which they started.
As Quinn makes explicit, his concern is with instrumental rationality. It is thus natural to interpret Quinn’s use of “worse” as “worse, relative to the agent’s preferences.” But how is “X is worse than Y, relative to the agent’s preferences” to be understood when X and Y are part of a preference cycle?
Do you enjoy puzzles? Yeah? Well then, let me share one with you. John Basl (Northeastern University) and I have had some fruitful conversations about it, and we have some views about how to address it (and some views about how not to); but in the spirit of collective inquiry and intellectual theft, let me take this opportunity to solicit your initial responses.
The puzzle might be construed either in terms of rationality or theoretical justification, but it is roughly as follows:
Why are we permitted to revise our moral/normative/evaluative beliefs in light of non-moral beliefs but not vice versa?
Indeed, while it’s clear we are often guilty of subconsciously shaping the facts to fit our evaluative commitments (e.g., the powerful correlations between political ideology, “climate skepticism,” 9/11 conspiracy theories, and beliefs about the president’s religion and birthplace), we all disavow this as a proper way to form our non-moral beliefs. As obvious as this may seem, the puzzle is how best to explain why this is so, and then to sort out what the implications may be for meta-ethics, moral epistemology, and even epistemology more generally....
I've got, well, a worry regarding the "necessity" requirement on the legitimacy of self- or other-defensive force. I don't really work on this stuff, so it's entirely possible that there's an easy, pat answer to this worry. Anyway, I'd be interested to hear what you all think.