Wednesday, October 18, 2017

Some common-sense advice regarding responding to peer-reviewers

Editing and (especially) reviewing are mostly thankless jobs that take time and attention from reviewers' busy schedules. Facilitating their work (or at least not making it harder than absolutely necessary) is therefore an important way to improve the odds of a favourable decision. As such, it is absolutely crucial that every reviewer/editor comment be acknowledged and addressed. Authors may always decline to make a requested change by presenting their reasons, but failing to mention any one of the reviewers' comments (even if only to dispute its pertinence) may come across as evasive and less than fully transparent. Moreover, it is one of the worst things an author can do to their chances of a favourable outcome: at best, it can be taken as a passive-aggressive way to signal discontent with the "dreaded reviewer #3"; at worst, it can be misinterpreted as an attempt to hoodwink the editors. In any case, it increases the probability of tipping the editor's judgment away from a positive decision.

A few hints to help reviewers appreciate your response:

  • When you prepare your rebuttal, provide the full text of all the reviewers' comments on the initial version of the submission, interspersed with your detailed replies to each point (preferably in a different font, for ease of reading).
  • Some journals request that re-submissions be accompanied by a copy of the manuscript file with highlighted changes. In that case, do not highlight those changes manually: use your word-processor's built-in "track changes" feature instead to compare the initial submission to your modified manuscript.

Friday, July 7, 2017

Today I became dreaded reviewer #3

I am now writing a referee report. I usually frame my comments diplomatically and try to be constructive (you will have to take my word for it...). Unfortunately, my first comment to these authors is uncharacteristically harsh, and I wish I had not needed to write it:
"I do understand that productivity and impact metrics like the number of citations, h-index, etc. are wrongly used by institutions and funding agencies to measure research productivity, and that scientists are implicitly (or explicitly) pressured to inflate them. I cannot, in good conscience, agree with that practice, but would have kept silent if the manuscript had cited a couple of papers by the authors in the introduction. However, in this manuscript 46 references are cited, of which 23 (numbers 8–11, 15–19, 21–23, 34–42, 44–45) are from the current authors. None of these 23 citations refers to specific results from those papers: they are rather cited as examples of well-known facts which either require no citation or should cite seminal papers/reviews in the area. I will not accept this paper in any form, for publication in this or any other journal, if those references remain."

I am afraid such comments to authors and editors must become much more common to stop the continuous gaming of the system. As long as metrics are used for ends they were not designed for, authors will (more or less grudgingly) try to game them, if only to ensure that they do not "fall behind" in comparisons with colleagues who feel even less compunction about gaming. Race to the bottom, and all that...