When reviewers go bad

I’ve discussed the peer review system before (see the accompanying repost and a more recent lament), but recent posts by Lab Lemming, Yami, Am I A Woman Scientist and Thermochronic have inspired me to think some more about a common scourge: the (usually) anonymous reviews that are perhaps a little more negative than is strictly justified. As Lemming puts it:

…the most common complaint about “bad” reviews is not the amount of criticism. Rather, it is generally the lack of science on which that criticism is based. Most people get over fair but harsh reviews; rather it is the inane, mean-spirited (and always anonymous) reviews that are really frustrating.


A typical scientific paper contains some new data, and your interpretation of them; and it is the latter, rather than the former, which often causes the trouble. There may be no obvious problems with your methods or results, but if they cast doubt on some pet theory which the reviewer has previously published on, then you’re not in for an easy ride. And if certain ideas and interpretations are unpopular with the small cadre of researchers at the top of your subfield – who also happen to be the people most often solicited for reviews – then you’re instantly in trouble, because even if you don’t know who they are, the editor does, and their opinion obviously carries a lot of weight.
It’s easy to blow this ‘Gatekeeper Problem’ out of proportion: we’re all quite proprietary about our own ideas, so it’s tempting to feel that you’re being unfairly treated when they’re trashed by some schmuck who won’t even do you the courtesy of looking you in the eye whilst they do it. We should remember that contentious interpretations are often contentious for a reason, which might be more apparent to people with a broader overview of the field than to us lowly junior researchers; and remind ourselves that those who carelessly tread the path of excessive self-victimhood all too easily stumble into the realm of crankdom. But even so, there’s a fine line between rejecting a new idea because it is clearly flawed, and rejecting it because it doesn’t fit with your own preconceptions, especially when you’re talking about complicated scientific problems where even the favoured interpretations may not be quite as all-singing and all-dancing as some would like to pretend. It’s just that the old errors and uncertainties have become features; old friends that people are loath to replace with brash new ones that might not laugh at their jokes, and might tread mud into their intellectual carpets.
This is where I get frustrated. I’ve always believed that the publication of your work should mark the start of the peer review process, not the end: you put your data and ideas out there for other scientists to see, scrutinise, and argue with you about. As in the blogosphere, links – or citations – are the currency: if you’ve made a good case, some people might use your hypotheses to inform their own work, or take them and test further implications. In this way some ideas will gain support, get cited and flourish, whilst others won’t pan out and will eventually fade away. The more people who are involved in this process, the better, so the initial reviewers of a submitted paper are doing everyone a disservice when they torpedo it not because of flaws in the dataset, but because they don’t like the author’s take on what it means*. In most cases, that debate should take place within the wider scientific community, rather than in private between the authors and one or two people.
Suggested ways of negating the effects of overzealous anonymous reviews have thus far centred on the question of anonymity itself. Should scientists be required to sign their reviews? As Thermochronic points out, less senior reviewers might not feel comfortable letting potential future employers know who’s been nitpicking their opus (of the papers I’ve reviewed, I’m 50/50 between declaring myself and not, and the decision in the latter cases was based entirely on that very concern). Or perhaps reviews should be double-blinded, with the authors’ names withheld from the reviewers? Nice idea, but as Lemming argues, in most cases it doesn’t require a supersleuth to work out whose research you are casting your eyes over. He also suggests making signed reviews part of an application for tenure; this might encourage more young scientists to participate in peer review, diluting the influence of the Gatekeepers. It’s a good idea, although to be effective, journal editors would need to play ball and actually ask us junior types a bit more often, and you still run into the problem of the little fish being a bit jumpy about publicly criticising the big fish.
Personally, I don’t really mind anonymous reviews if we can break the correlation between anonymity and hatchet-job reviews which are not motivated purely by scientific concerns. And I think the only way to do this is to open up the process somewhat. In my ideal system, papers should only be rejected outright if the reviewers identify methodological or analytical errors with the data. If a reviewer has serious problems with the interpretation, their comments should be published together with the paper, alongside any rebuttal or response from the authors.
The advantages of this system would be that arguments over ideas are out in the open, where they belong. It would also blunt the effectiveness of anonymity for the Gatekeepers, whose opinions and prejudices are already pretty well known within the scientific community (for example, we have a pretty good idea of who the anonymous reviewer in my recent blood-letting is), and who might therefore moderate their zeal a little; yet it would not remove anonymity’s protection from junior researchers, who will probably be less easily identifiable from content alone. And if you take the additional step of making these published reviews citable (as comment-and-reply submissions are), then people will have a positive reason for signing their names.
Such a system is not without its difficulties, of course: for example, there are always going to be cases where it is difficult to separate criticism of the methods from criticism of the interpretation. However, it turns out that attempts are already underway to implement a publishing regime which follows a broadly similar philosophy to the one I outlined above. Copernicus, the EGU journals publisher, is using a two-stage publication model (thanks to James for the heads-up on this):

In the first stage, papers that pass a rapid access peer-review are immediately published on the discussions’ website. They are then subject to interactive public discussion, during which the referees’ comments (anonymous or attributed), additional short comments by other members of the scientific community (attributed) and the authors’ replies are also published.

In the second stage, the peer-review process is completed and, if accepted, the final revised papers are published in the real journal online or in print or both. To ensure publication precedence for authors, and to provide a lasting record of scientific discussion, the discussion and the real publication are both ISSN-registered, permanently archived and fully citable.

The PLoS journals also seem to be based on a similar system:

PLoS ONE employs a two-stage assessment process. Pre-publication peer review concentrates on objective and technical concerns to determine whether the research has been sufficiently well conceived, well executed, and well described to justify inclusion in the scientific record. Post-publication, papers are opened up for interactive discussions and assessment in which the whole scientific community can be involved.

This seems to be a much more sensible approach than the Nature experiment, which decoupled the “open reviews” from the more traditional system. Then again, Nature’s publication criteria, which have always included less objective considerations, are perhaps not especially compatible with a more open review system.
Of course, this is all very interesting, but we can’t ignore the elephant in the room: unless the big boys like Elsevier adopt such systems, or we all put our money where our mouths are and opt for the more innovative journals, nothing much is going to change.

*As an aside, I’m starting to think that far too many reviewers concentrate on the interpretation rather than the data in their reviews. I’m currently reading a lot of papers containing Archean paleomagnetic data from South Africa, and in more than one case the data quality is so bad that any interpretation is essentially no better than hand-waving.
