Sunday 21 July 2013

Peer review, failure and the benefits of persistence

Earlier today I found myself reflecting on our recent discussion of peer review after I came across the reviews that accompanied the rejection of a paper I submitted a while back (see below).  The second review, though somewhat scathing, at least offered a few pointers to what the reviewer thought was wrong.  The first reviewer obviously did not suffer from self-doubt.  Her/his comments were wholly unhelpful.

The paper was later published (albeit in a less prestigious journal) and was recently nominated for an outstanding paper award.

It could well be argued that I had submitted to the wrong journal.  Clearly the audience of The Electronic Library was more receptive than the reviewers whose comments are published below.  However, it would be helpful if reviewers could be reminded of the value of supporting their statements with evidence, and discouraged from presenting opinions as facts.  Referee 2 asserted that "The presentation and argumentation is poor by any standards".  This was a rather arrogant extrapolation.  The reviewer's standards were presented as indicative of all standards.  Such practice is common amongst journalists, but should, I feel, be avoided by academics, even under the cloak of anonymity.  

Fortunately for me and my co-authors, Referee 2's belief that her/his standards were universal proved not to be the case.

Referee 1
This is very weak in many ways: the related work is unacceptably shallow, the experiment itself may be fatally flawed (I cannot tell from the current description but it feels that way) and the results are trivially presented. Even if they fixed the above I think the design and their current approach to investigating this issue may be problematic for publication; we already know a lot about relevance criteria and evaluative judgments and I doubt their approach is going to give much that is new. This almost assumes that nothing has been done in this area and their experiment feels very naïve. The only (relatively) new aspect is the meta-cognitive approach but they need to do something sensible with this rather than treating it as a buzzword. 

Referee 2
I recommend that this paper not be accepted by the journal. The paper has many serious problems that need to be addressed and lead the reviewer to reject the paper.

The presentation and argumentation is poor by any standards, and well below what would be expected of a journal like [name of journal]. The paper needs a good abstract that provides a good summary of the paper. The current abstract is poorly written, and gives little insights into the key findings of the study. Most readers will only read the abstract and not read the complete paper unless the abstract is good.

The introduction section is weak: the paper is not clear as to the research problem, and why it is important and significant. The paper has no clear theoretical framework, nor an adequate literature review. Research questions not clearly spelled out.

The research design section of the paper is poor for an empirical study. This section is inadequate and does not reach the level that another researcher could replicate the study. Needs a data collection section and a data analysis section.

The Results section is unclear and not comprehensible. 

The paper has no Discussion section that discusses the key findings in relation to previous studies. Where are the limitations and implications?

Much of the Conclusions section needs to be in an earlier section. No further research is discussed.

The references section is messy, not in alphabetical order and has missing references.

Overall, the quality of this paper is poor in structure and content. The paper lacks a theoretical framework, is a poor empirical paper and is embarrassingly weak for a paper submitted to [name of journal]. I recommend that the paper be rejected.

Tuesday 9 July 2013

Introducing KNIME (by Edmund Duesbury)

This blog entry could revolutionise the way you do information-based research!

KNIME (pronounced /naɪm/, rhyming with "time") represents an exciting step forward in programming and the development of scientific tools for information sciences.  KNIME is a freely available, open source workflow tool which allows users, in a few clicks, to develop "programs" which can
  • generate statistics from given input data,  
  • perform visual analyses and
  • generate graphs and web-based reports.

KNIME has a wide variety of program libraries built in.  Elements of these can be dragged and dropped into a workspace, enabling users to create a program in mere minutes.  The resulting workflows are not only useful, they look good as well!  Because KNIME makes use of visual representations of program components, it's incredibly easy to use.  There is no need to mess around with UML or pseudocode, and planning of workflows becomes straightforward.
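To give a flavour of what such a workflow computes, here is a rough Python sketch of a minimal "read some data, then summarise it" pipeline.  This is purely illustrative: the file name "measurements.csv" and its contents are invented, and in KNIME itself the same result is achieved by wiring together a couple of nodes, with no code at all.

    # Illustrative only: roughly what a "file reader -> statistics ->
    # plot" style workflow computes. The file name is invented.
    import pandas as pd
    import matplotlib.pyplot as plt

    # Read the input data (the job of a file-reader node).
    data = pd.read_csv("measurements.csv")

    # Summary statistics per numeric column: count, mean, standard
    # deviation, min/max and quartiles (the job of a statistics node).
    print(data.describe())

    # A quick visual analysis: a histogram of every numeric column
    # (the job of a plotting node).
    data.hist()
    plt.show()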

In this session, James Wallace and I will try to introduce you to KNIME, and give an example of how to obtain simple statistics from some input data.  We’ll also demonstrate a classification method for plants which anyone can implement and which is easy to understand. Please feel free to come along, ask questions, and have a play with KNIME.
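For the curious, the kind of plant classification we have in mind can also be sketched in a few lines of Python.  The details below are assumptions for illustration only: the classic Iris flower dataset and a k-nearest-neighbours classifier stand in for whatever appears in the session, and the KNIME version is assembled entirely from drag-and-drop nodes.

    # Illustrative sketch: classify plants from simple measurements.
    # Dataset (Iris) and method (k-nearest neighbours) are assumptions;
    # the session's KNIME workflow needs no code at all.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # 150 iris flowers, four measurements each, three species labels.
    X, y = load_iris(return_X_y=True)

    # Hold out a third of the flowers to test on.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, random_state=0)

    # Label each test flower with the majority species among its
    # five nearest neighbours in the training data.
    model = KNeighborsClassifier(n_neighbors=5)
    model.fit(X_train, y_train)
    print("Accuracy on held-out flowers:", model.score(X_test, y_test))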

More information, including download links, can be found at: http://www.knime.org/knime