Getting a bad press
The first time I was ever subjected to any kind of online review was when I bought something on eBay. The only time I have had feedback that wasn't positive was following my 8th transaction.
The item, when it arrived, looked tacky. Since I had only paid £5 for it, I couldn't complain much. What got me was the cost of postage. The stamp cost around £1.60 and the packaging didn't look expensive. Since I was charged £15 for postage and packing, I figured that the company were paying themselves around £13 for the onerous task of packing and posting. A pretty good hourly rate.
I gave the vendors a 'neutral' rating. They weren't happy. Their feedback score of 99 and a bit percent positive (based on thousands of transactions) was reduced to 99 and a slightly smaller bit percent positive. They responded by giving me a neutral. Since I had only 8 feedback scores, this one stuck out. Accompanying it was the message: "There was no need for neutral feedback as we provided a great service."
I had naively assumed that it was my job to judge the quality of the service. I took a look at their feedback history and found that they always responded to neutral or bad scores by giving a similar score in return. A retaliatory neutral barely dents a record built on thousands of transactions, but it looms large in a record of eight, so customers with few transactions appeared to be a bad risk. Since then, I've always been cautious about interpreting online feedback, so I raised it as a topic for discussion in the Researchers' Group last month.
Everyone at the meeting had learned to exercise similar caution, but in the recent past not all academics had been so clued up. In 2010, on Amazon, a reviewer nicknamed 'Historian' posted glowing reviews of recent work by Orlando Figes, whilst dismissing books by rival historians as "dense", "pretentious", "rubbish" and "awful". It later emerged that 'Historian' was none other than Orlando Figes himself (Itzkoff, 2010).
Meta-evaluations
In the course of our discussion, it was agreed that, where possible, the provenance of a reviewer should be taken into account (not something that can always be done with anonymous peer reviews!). It was also noted, however, that comments are often more useful than ratings.
The emergent evaluation of evaluations can be summarised as:
1) Is the reviewer trying to be helpful, or is s/he (i) just a persistent complainer, or (ii) working to an obvious agenda (e.g. historians reviewing their own books and those of rivals)?
2) Are the grounds for complaint relevant to me? (E.g. a restaurant that I'm visiting by bus is heavily criticised for offering inadequate parking.)