Vincent Calcagno has a fascinating piece up at the LSE Impact Blog, in which he looks at the review and publication histories of an absolute pile of articles.  There is a whole set of interesting findings there that is well worth the read.  For example:

But, surprisingly, we found that about 75 per cent of all articles were declared to have been submitted to the publishing journal on first intention. Even assuming that, for some reason, authors were less likely to respond in the case of a resubmission, we still find that a majority of published articles are first-intent submissions. This suggests that authors are, overall, quite apt at targeting a proper journal and, conversely, that journals make sure they have a sufficient public: no journal was found to be entirely dependent on resubmissions from others.

However, the finding I found most interesting was this:

in a given journal and a given year, an article that had been resubmitted from another journal was on average more cited than a first-intent submission. Resubmissions were less likely to receive zero or one citation (about 15 per cent less, controlling for publication year and journal) and more likely to receive several (e.g. 10 and 50) citations, shifting the mean to higher values. This intriguing result suggests a “benefit of rejection”. The simplest explanation would be that the review process and the greater amount of time spent working on resubmitted manuscripts does improve them and makes them more cited, although other mechanisms could be invoked.

I wonder, though, if there is another factor that should be considered.  Peer review is inherently conservative – there is a lot of thought policing that goes on through this process (I’ve gone on about this before, here and here).  I wonder how many of the “resubmissions” were rejected not because of insufficient quality, but because they were doing interesting work that threatened one or more reviewers.  This makes sense, as new and edgier work will eventually get cited more than middle-of-the-road replication of old results – at least, that has been my experience.  So perhaps Calcagno has given us empirical evidence for the intellectual policing function of peer review.

Colleague Ben Neimark at ODU recently asked me a tough question: “What makes for good (helpful to get published, strengthened, intellectually creative, etc.) peer review?”  I figured this might be of wider interest to academic colleagues, as well as to those who see the entire academic publishing world as somewhat opaque.  So . . .

I think the challenge in producing a good peer review is balancing its dual imperatives.  There is the part of peer review that ensures quality and offers constructive criticism (I have received some in the case of my current livelihoods work – see here, here and here – and have had reviewers offer great stuff in the past).  Then there is the disciplinary policing that goes on through peer review, where reviewers don’t examine the quality of the data or argument, but simply argue against it because it challenges a convention (one the reviewer likely belongs to or helped establish) – see my comments about reviewer 1 at the bottom of this post.  This second function makes innovation very challenging unless you are very, very hardheaded (which I am).

In a nutshell, though, I think good peer review is review that takes a paper on its stated aims and evaluates:

  1. whether those stated aims are actually new and interesting, and
  2. whether the paper achieves those stated aims.

If standard 1) is not met, a good peer reviewer should be able to suggest where the real contribution of the paper lies – i.e. by suggesting literatures into which the author should place the manuscript.  If standard 2) is not met, the reviewer should explain exactly how and why this happened, and what sorts of remedial steps might solve the problem(s).  That is my minimum take . . .

I’m happy to hear the opinions of others . . .