Tuesday, 19 January 2016

Is that ethical? Exploring ethical challenges facing social media researchers (by Aimi Ladesco)


This post explores some of the challenges, issues and grey areas that can arise when researching user-generated content (e.g. social media and web forums).

It is important to distinguish between situations that relate to the researcher's role as an individual and those that relate to her role as an associate of her employing institution.

As an individual, the researcher may take a utilitarian view and argue that there is value in capturing and analysing user-generated content at an aggregate level, in order to monitor trends associated with events of public interest, such as infectious disease outbreaks. Such an analysis could clearly be of value and, as the utilitarian philosopher Jeremy Bentham might have observed, would produce more good than harm. If, by contrast, the researcher took a deontological stance such as Immanuel Kant's, based on duty rather than utility, she might object to the use of such content without informed consent. To proceed without consent, she might argue, could open the door to all kinds of personal data being accessed on the grounds that doing so serves the greater good.

As an associate of an employing institution, the researcher faces additional ethical considerations. She will, for example, be expected to help her employer fulfil its legal duties with regard to the Data Protection Act and the safeguarding of participants. She will also be, to some extent, morally obliged to protect the image of her employer.

The ethical policies that arise from such considerations can sometimes delay research or halt potential collaborations. This matters because one of the values of social media research is that it allows the capture and rapid analysis of data relating to emerging news stories.

Research institutions often have different ethical policies, some stricter than others, possibly because, like individual researchers, institutions weigh utilitarian and deontological considerations differently. The loss of opportunity to analyse and react to change caused by some of the stricter policies may, ironically, be a cause of harm as well as a means of preventing it.

There are instances where a researcher may, through appropriate use of social media data (such as Twitter), be able to map locations of particular concern and identify refuge points in a crisis situation: the work of digital humanitarians such as Patrick Meier provides a good example. The Standby Task-Force is a global network of volunteers who assist crisis-affected communities. However, initiatives such as this may be stifled by some of the ethical policies associated with research involving human participants.
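As a rough illustration of the kind of aggregate, location-based analysis described above, here is a minimal Python sketch that bins geotagged posts into coarse grid cells so that clusters of activity stand out. The function name and coordinates are invented for illustration; this is not the method of the Standby Task-Force or any platform's API.

from collections import Counter

def bin_posts_by_grid(posts, cell_size=0.01):
    """Count geotagged posts per latitude/longitude grid cell.

    posts: iterable of (lat, lon) pairs in decimal degrees.
    cell_size: grid resolution in degrees (0.01 is roughly 1 km).
    Returns a Counter mapping (cell_lat, cell_lon) -> post count.
    """
    counts = Counter()
    for lat, lon in posts:
        cell = (round(lat / cell_size) * cell_size,
                round(lon / cell_size) * cell_size)
        counts[cell] += 1
    return counts

# Purely illustrative coordinates; a real crisis-mapping workflow would
# stream geotagged posts from a platform API and filter them for relevance first.
sample = [(51.501, -0.142), (51.502, -0.141), (51.500, -0.143), (48.857, 2.352)]
print(bin_posts_by_grid(sample).most_common(2))

Even a simple aggregation like this raises the questions discussed here, since the underlying posts were created by people who never consented to being mapped.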

This leads to another distinct but related concern: should such ethics policies apply to research conducted in a researcher's own time, using her own equipment, rather than to research carried out as part of her role as an employee of a research establishment? For example, certain voluntary activities (such as those of the Standby Task-Force) may be classed as research. If they are classed in this way and are, according to the employing institution, deemed unethical, who should carry out such potentially life-saving activities?


Sunday, 17 January 2016

Santa Claus: The truth! (And the usefulness)

It's a nice feeling to get a paper accepted.  Then comes the crunch moment when you realize that you said something silly and it's now preserved in print.

In 2014, I had a paper published in J.Doc about the evolution of information.  In it I argued that:
"the beliefs of any culture lead to practices that can be fitted into one of three categories. There will be some that are useful for all people for all time; some that were useful for some people at some time; and some that were never useful for anyone at any time."

I was being cautious. Originally I had meant to write that beliefs were true for all time, for some time or for no time, but I was daunted by the philosophical baggage associated with the word 'truth', so I chose instead to refer to usefulness. That was a big mistake. Last month's discussion was an example of why.

The topic was: "When did you stop believing in Santa? Why? If you still believe in Santa, please come prepared to present evidence. If your culture is a Santa-free zone, who or what is the equivalent in your culture?"

Not surprisingly, none of those who attended believed in Santa. Sadly, we didn't have anyone from another culture who was prepared to nominate a Santa equivalent. What emerged from the discussion, though, was that Santa Claus is a very creepy individual. An old man who spends 364 days of the year monitoring the behaviour of children and who is capable of sneaking unseen into their bedrooms at night would, in most other circumstances, be an object of fear rather than affection. As it turned out, we weren't the first people to have that thought, and Santa Claus has featured in at least one horror film.

Santa Claus is, however, an example of where utility and truth diverge. His myth is, I'm fairly certain, one that few adults have ever believed. Yet, like many myths without truth, it is useful. A point made by more than one person at the discussion was the role Santa Claus plays in coercing excitable children to go to bed quietly on Christmas Eve. He is a metaphysical protection racket: behave, or else!

Santa Claus is an example of why, in all probability, there have been no beliefs that were useful for nobody at any time. Even beliefs that are clearly and demonstrably untrue can be put to use by someone.