Tuesday, 21 May 2013

Peer review, citation inflation and 'power citing'

This month's discussion began as a conversation about the value and validity of peer review, but drifted into a reflection on citation inflation.

There was a clear divide in the group. Those yet to publish in peer-reviewed journals were more trusting of the peer review mechanism than were those of us who had, at some time or other, had to cope with reviewers who clearly failed to understand what they had read and yet still felt qualified to demand changes. Or, worse still, reviewers whose remarks were not in any way helpful and appeared gratuitously vitriolic.

Unfortunately, such behaviour is an unpleasant side effect of the anonymity of peer review. Anonymous peer review is often thought to be a long-established practice, but when, several years ago, I tried to find out just how far back it goes, I was surprised to discover that nobody appears to know, though it was probably introduced after World War II.

Angharad mentioned that, in her experience of being on the editorial team for Library and Information Research, it was not uncommon for reviewers to suggest to the author of the paper under review that it would be improved if it referred to the reviewer's work.  That led to a shift in topic and we began discussing the reasons why papers are cited.

I wasn't alone in being frustrated by the growth in the number of articles cited in papers nowadays. As an example of the growth, when I tried looking through early issues of the Journal of Documentation, I discovered that, throughout the forties and fifties, hardly anyone cited anything. As is clear from the graph below, however (plotted using data from Singh, Sharma, & Kaur, 2011), things are very different nowadays.

No doubt some of the work cited in a paper is genuinely useful to the author of that paper. However, I've been guilty in the past of including papers just to show that I was aware of them, rather than because they added much to my thinking or understanding.  I know from talking to other researchers that this is not uncommon.

One practice I would like to see introduced is that of power citation. As well as nominating keywords from their article, authors could nominate up to five references which they found to be particularly valuable when compiling their article. These could act as "edited highlights" of the references and provide guidance to anyone wanting to know where to begin if they wished to read around the article. If the practice became widespread, it might also prove a useful bibliometric tool.

Maybe Library and Information Research can be persuaded to pioneer the practice.

Madden, A. D. (2000). Comment: When did peer review become anonymous? Aslib Proceedings, 52(8), 273-276.
Singh, N. K., Sharma, J., & Kaur, N. (2011). Citation analysis of Journal of Documentation. Webology, 8(1).


  1. Thanks for the post, Andrew, it sounds like an interesting discussion! I agree with the observations, and would also add that doing a thorough article peer review that is written in a constructive and comprehensible way is very time consuming (I've already spent hours on one article I'm refereeing at the moment and it isn't finished yet...). On a sort of related topic, I just came across the DORA website http://am.ascb.org/dora/ which is challenging the idea of using citation impact to evaluate research.

  2. Sheila - Many thanks for your comment. Because some reviewers put in a great deal of effort, peer review (for all its faults) remains a useful system. Unfortunately (to misquote Mr Spock) the abuse of the few outweighs the efforts of the many.

  3. Hi Andrew, it was a really interesting discussion and the post is a good summary. However, I wanted to clarify that my observation regarding reviewers suggesting that authors should refer to the reviewer's work didn't come from my work with LIR but from other people's anecdotes about their experience of peer review in other disciplines. It's not necessarily a negative thing, though - after all, if the purpose of peer review is to get detailed comments from experts in the field, they may well be justified in referring authors to their own publications!

    An additional point I'd like to make is that it's fine to disagree with a reviewer's comments, and not to make changes they suggest, as long as you explain this when submitting revisions. Ultimately, it is the editor (rather than the peer reviewer) who makes the final decision about whether to accept a paper for publication.

    I think the Altmetrics manifesto is a good starting point for improving evaluation of the impact of research: http://altmetrics.org/manifesto/

  4. Oops - apologies for misrepresenting you, Angharad. And thanks for the point about it being OK to disagree with reviewers. I've done that in the past and had reason to be grateful for sympathetic editors. Also - the Altmetrics manifesto is well worth a look.