Imagine a world where, instead of journals, there are assessment houses. You send your paper to an assessment house of your choosing, just as you now send your paper to a journal of your choosing. That assessment house, possibly after a round or two of referee feedback and revision, assigns your paper a grade. You may then post that paper with the grade from the assessment house. Perhaps the assessment house notes all the papers it has assessed and their grades. Existing services such as PhilPapers or Academia.edu provide a stable online place to archive one's paper in a citable format. Papers there are available free to all. Perhaps existing editorial boards for journals might serve, initially at least, as the assessment houses.
The advantages of such a system are that 1) more people would have access to more scholarship, 2) the process would cost less money, 3) since the average paper would be refereed much less often, far less refereeing would be needed overall, which would let referees focus more carefully on the fewer jobs they take on and make it easier to quickly find willing referees who are especially well suited to the paper at issue, 4) scholarship would become available more quickly, and 5) graduate students and junior faculty could be more confident that papers completed close to tenure decisions, or close to going on the market, would be taken into account.
The main benefits of the journal system, namely that it provides a rough assessment of the quality of papers and a way to access them, seem to be secured by this alternative. Indeed, the new system is arguably superior on this score, since it can express more fine-grained assessments than accept or reject.
There would be an issue of where the money to support assessment houses would come from. Currently, subscriptions provide that income. But this should be a solvable problem, given that the system would require less money overall. Perhaps universities might start diverting a portion of their library funds to pay for such a system, initially dropping subscriptions to the more expensive journals so as to support this alternative.
I am sure there are disadvantages to such a scheme. Perhaps, for example, the loss of physical journal issues would be a big deal. But the real question is whether the imagined system is inferior to the status quo, and that is what I would like to think through with you. Certainly, many questions would have to be settled. Can one have one's paper assessed by more than one assessment house? Can one withdraw a paper after submission, if one does not like the referee comments, and not have the paper assessed?
David,
I certainly think that something along these lines is a good direction to go in. I'd like to hear more about why you prefer (if you do) a centralized grading system over crowd-sourcing. We might (even more cheaply!) just let everyone post papers to a central location and institute a voting system, say either grades or Reddit-style up/down votes.
Posted by: David Faraci | May 19, 2015 at 12:53 PM
On Facebook I have been getting a variety of comments, both positive and negative, about the proposal. Let me encourage people to discuss the issue here, and let me also try to respond to some of those Facebook comments.
Here is one main concern I have been hearing: the proposed system turns a low-stakes journal assessment into a high-stakes assessment. Because there are lots of good journals, having a paper rejected from one does not hurt one too much. If, on the proposed system, you only get one or two bites at the apple from assessment houses, then one or two false negative assessments hurt one more than a false negative assessment under the current journal system.
This is a real concern, and the one I so far find most worrying. But here are some countervailing thoughts. Grad students within two years of the market and junior faculty within two years of tenure already face high-stakes journal assessments regularly. It takes so long under the current system to get a response that people in those situations must think carefully about where to send their papers. Even if she quite likes her paper, the late-stage grad student must weigh sending the paper to a high-end journal first, which carries a high risk of rejection, against sending it to a journal with a higher rate of acceptance. An R and R paper that is ultimately rejected is devastating at this stage. Much better, for people at this stage, to get a paper assessment equivalent to "almost acceptable to the high-end journal." And much fairer not to create such a powerful incentive against sending a student's best work to the best journals, an incentive that keeps that work from being eligible for the high assessment it perhaps deserves. In other moods I have toyed with recommending that there be, for example, an Ethics 2 for papers nearly accepted by Ethics.
Additionally, keep in mind that what feels like a false negative these days (a good paper rejected) need not be far from an accurate assessment. The danger of the new system is an assessment that is way wrong, and that is rarer than a deserving paper being rejected.
Posted by: David Sobel | May 19, 2015 at 12:56 PM
David: I prefer a system in which those with expertise in the area of the paper do the assessment. If, on the alternative system, anyone can assess, including friends and enemies of the author, or people who are not at all up on the relevant literature, then I would prefer the system I propose to that one.
Posted by: David Sobel | May 19, 2015 at 12:59 PM
This is a nice idea, David. One disadvantage: If we just post papers on PhilPapers or Academia.edu, we won't have copyediting or typesetting. This will downgrade the reading experience, which is already an issue given the pressures to publish quickly rather than spend time crafting one's paper to make for an enjoyable read (in addition to having content that makes an important contribution to the literature).
And yet one can get many of the benefits of your proposal by just following in the footsteps of Philosophers' Imprint. In fact, your proposal is quite similar to their vision of the future; yours just does away with the more familiar format of a journal. The PI vision is much better in practical terms too. Despite being politically progressive, philosophy doesn't change itself very quickly and seems to fear change. Online open-access journals (like PI) are already having a difficult time getting respect from some deans or department chairs. That will pass, but largely because PI still looks like a traditional journal. So I'd also worry that departing too far from the status quo might make for a proposal that falls stillborn from the presses (or lack thereof!).
Posted by: Josh May | May 19, 2015 at 04:35 PM
I doubt the wisdom of this particular proposal, for reasons I noted on Dave's Facebook page, which I suppose I could reproduce here. But I do want to give Dave credit for thinking creatively about solutions to the problems currently in place at the journals, problems which seem to me to get far less attention than they merit.
Posted by: Dale Dorsey | May 19, 2015 at 04:50 PM
Dale: Please feel encouraged to repeat your concerns here so non-Facebook folks can hear your thoughts.
Posted by: David Sobel | May 19, 2015 at 05:35 PM
Some may not be familiar with the Mission statement from Philosophers' Imprint. I have been very influenced by their statement. To see it, scroll down a bit on this link: http://www.philosophersimprint.org/about.html
Posted by: David Sobel | May 19, 2015 at 05:41 PM
Here's what I wrote on Dave's Fb page. I now have some reservations about concern number 4, but the others seem to me important.
----
Hi David. While I'm sympathetic to the desire for an alternative to our current system (which sees two of the top journals literally shut down for six months out of every other year; a less than ideal situation, surely), I have quite grave concerns about the proposal as compared to the current system. I'll just start listing them here.
1. One concern has been mentioned already: false negatives. One good thing about our current system is that there are lots of very good journals, publication in any of which would count positively toward tenure. You get a bad rejection at one, send it to the next. A bad rejection at another, send it to a third, and so on. If it's a good paper it'll eventually get picked up. But that's certainly not guaranteed to happen within any particular limit on the number of assessments, and I suspect that your system requires such a limit (as already mentioned).
2. Let's say I'm an assistant professor at a major research university; to get tenure under your system, I would need to have several papers with high grades. But we all know that the peer-review system tends to work against papers that defend unorthodox or heterodox positions. Under the current system, I can keep sending such a paper around, knowing that it may take a while to find a sympathetic reviewer, or perhaps I could publish it in an edited volume. But it's hard to see how a tenure committee, knowing that such an assessment system exists, would count unassessed papers toward tenure. The result, or so it seems to me, would be a shift toward more conservative, less daring papers written by junior folks. That's already happening, of course. But this proposal would seem to magnify the effect.
3. There appears to be a serious problem of translation in what the grades mean. Let's say I'm on a tenure committee: I think that a person ought to have six "A" grades. But now suppose I'm an assessor. I get a paper that would probably not merit publication in Phil Review, but maybe Phil Studies. I give it a "B". You get the problem. Keeping clear on what these grades mean would be very difficult. I'm not saying that the current system doesn't have similar problems, but once again this would be an added wrinkle, one likely to be to the detriment of junior folks and their ability to make a living.
4. What incentive would there be to run an assessment house? I take it that part of the fun of being a journal editor is collecting and presenting papers one feels match one's interests and the cutting edge of the discipline. There's a certain pride in putting out an issue presenting to the public simply terrific work, or work one thinks is excellent anyway. (Same for associate editors, I would expect, though maybe I'm wrong.) But if I'm a glorified grader, I doubt the same incentive would exist.
5. Connected to the above: how many assessment houses do we need? At least as many as the limit. But then what happens if we encounter corruption, cronyism, excessive pedigree-sensitivity, and so on, at one or more of the assessment houses? I ask because this appears to be a serious problem in the current system. But the cool thing about the current system is that there are a bunch of journals; one can just send a paper to the responsible ones. (Perhaps that's too breezy, but the general point is that under the current system one bad apple doesn't spoil the bunch.) Given the limited number of assessment houses sure to exist, any problem of the above kind would magnify the concern about false negatives.
I suppose many of my concerns can be summed up like this: in the current system there are lots of options for junior folks. It sounds like under your system there would be far fewer, whether because of the limit, the lack of incentive for editors, and so forth. And I think that's a big, serious problem. I don't know enough to tell you whether or not the disadvantages of the current system are so serious as to outweigh these concerns. But if I were a junior philosopher, especially someone who works in areas outside of the M/E/E/Meta-E core, someone with a lower-status pedigree, or someone working at a lower-status university in a lower-status position, I would view such a system with serious trepidation.
----
Dave, on Fb, suggested a nice response to concern number three, namely that if there are problems here, they're on the tenure assessors. And while I fully agree with that, I don't think we should believe that simply instituting a new style of paper reviewing is going to render vicious philosophers virtuous. Adding complications, and the possibility of this sort of error, would seem to make the situation more difficult, given the lazy or unscrupulous assessors that already exist.
Posted by: Dale Dorsey | May 20, 2015 at 04:13 PM
I quite like this. It seems to me that a lot of the "false negative" problem can be solved by the fact that people don't have to post their papers with the grades they got; they can revise them and try to improve the papers, which is generally a good thing.
Sometimes I think we should just all post our papers, ask people for feedback, circulate them to people who are interested, and then, when it comes time for tenure assessment, the assessors should read the papers and see whether they're good!
Posted by: Matt Weiner | May 20, 2015 at 09:58 PM
David,
In an ideal world, papers would be reviewed only by those who are really qualified to review them,* and who will take that job seriously. The question is whether this ideal is better approximated by a top-down system like yours or by a crowd-sourcing system like the one I'm proposing.
Just a couple of preliminary thoughts. First, it seems to me that the current system does a far from perfect job of this. Even top journals ask people to referee papers when they have little to no evidence that those people are experts in the relevant subfield. Nothing in your system seems prepared to improve upon this. Indeed, if Dale's worries about incentives are at all legitimate, your system might turn out to be worse.
In contrast, the hope with a crowd-sourced system is that, with a large number of participants, most of whom have enough integrity to rate only those papers they are qualified to rate and to do their best to set aside personal biases, the trolls get swamped. There is at least some evidence (e.g., perhaps, Amazon) that this can work.
Finally, the above suggests a major advantage of the crowd-sourcing system with respect to the worry about false negatives, which is that crowd-sourcing is dynamic: A paper's grade is not "locked in" by one review.
*It's actually not clear to me that this should always be people who work in the relevant subfield; but I'll set that aside here.
Posted by: David Faraci | May 21, 2015 at 04:17 PM