Tuesday, 19 January 2016

Is that ethical? Exploring ethical challenges facing social media researchers (by Aimi Ladesco)


This post explores some of the challenges, issues and grey areas that can arise when researching user-generated content (e.g., social media and web forums).

It is important to distinguish between situations that relate to the researcher's role as an individual and those that relate to her role as an associate of her employing institution.

As an individual, the researcher may take a utilitarian view and argue that there is value in capturing and analysing user-generated content at an aggregate level, in order to monitor trends associated with events of public interest, such as infectious disease outbreaks. Such an analysis could clearly be of value and, as the utilitarian philosopher Jeremy Bentham might have observed, would produce more good than harm. If, by contrast, the researcher took an ethical stance such as that of Immanuel Kant, based on duty rather than utility (deontological), she might object to the use of such content without informed consent. To use it without consent, she might argue, could open the door to all kinds of personal data being accessed on the grounds that doing so could be for the greater good.

For the researcher as an associate of an employing institution, there are additional ethical considerations. She will, for example, be expected to help her employer fulfil legal duties with regard to the Data Protection Act and the safeguarding of participants. She will also be, to some extent, morally obliged to protect the image of her employer.

The ethical policies that arise from such considerations can sometimes delay research or halt potential collaborations. This matters because one of the strengths of social media research is that it allows the capture and rapid analysis of data relating to emerging news stories.

Research institutions often have different ethical policies, some stricter than others, possibly because, like individual researchers, institutions are differently affected by utilitarian and deontological considerations. The loss of opportunity to analyse and react to change caused by some of the stricter policies may, ironically, be a cause of harm as well as a means of preventing it.

There are instances where a researcher may, through appropriate use of social media data (such as Twitter), be able to map locations of particular concern and identify refuge points in a crisis: the work of digital humanitarians, such as Patrick Meier, provides a good example. The Standby Task-Force is a global network of volunteers who assist crisis-affected communities. However, initiatives such as this may be stifled by some of the ethical policies associated with research involving human participants.

This leads to another distinct, but related, concern: should such ethics policies apply to research conducted in a researcher's own time, using her own equipment, rather than research carried out as part of her role as an employee of a research establishment? For example, certain voluntary activities (such as the Standby Task-Force) may be classed as research. If they are classed in this way and are, according to the employing institution, deemed unethical, who should carry out such potentially life-saving activities?

Sunday, 17 January 2016

Santa Claus: The truth! (And the usefulness)

It's a nice feeling to get a paper accepted.  Then comes the crunch moment when you realize that you said something silly and it's now preserved in print.

In 2014, I had a paper published in J.Doc about the evolution of information.  In it I argued that:
"the beliefs of any culture lead to practices that can be fitted into one of three categories. There will be some that are useful for all people for all time; some that were useful for some people at some time; and some that were never useful for anyone at any time."

I was being cautious. Originally I had meant to write that beliefs were true for all time, for some time or for no time, but I was daunted by the philosophical baggage associated with the word truth, so I chose instead to refer to usefulness. That was a big mistake. Last month's discussion was an example of why.

The topic was: "When did you stop believing in Santa? Why? If you still believe in Santa, please come prepared to present evidence. If your culture is a Santa-free zone, who or what is the equivalent in your culture?"

Not surprisingly, none of those who attended believed in Santa. Sadly, we didn't have anyone from another culture who was prepared to nominate a Santa equivalent. What emerged from the discussion, though, was that Santa Claus is a very creepy individual. An old man who spends 364 days of the year monitoring the behaviour of children, and who is capable of sneaking unseen into their bedrooms at night, would, in most other circumstances, be an object of fear rather than affection. As it turned out, we weren't the first people to have that thought: Santa Claus has featured in at least one horror film.

However, Santa Claus is an example of where utility and truth diverge. His myth is, I'm fairly certain, one that few adults have ever believed. Yet, like many myths without truth, it is useful. A point made by more than one person at the discussion was the role that Santa Claus plays in coercing excitable children to go to bed quietly on Christmas Eve. He runs a metaphysical protection racket: behave, or else!

Santa Claus is an example of why, in all probability, there have been no beliefs that were never useful to anyone at any time. Even beliefs that are clearly and demonstrably untrue can be put to use by someone.

Tuesday, 15 December 2015

Research hits and misses

“Come prepared to nominate the author of that paper that has shaped your thinking and helped to focus your research. Or - for the more negatively inclined - come and name the author who everyone cites and you cannot understand why.”

That was the topic for November’s discussion group.  Those who suggested authors tended to nominate positive influences, though a few did identify some people whose work was a source of frustration.

Generally, the authors considered helpful were thought to be so because they clearly described approaches or techniques that were useful to those recommending them. Wasim, for example, referred to the research of Gunther Eysenbach, whose content analysis of tweets during the 2009 swine flu outbreak has shaped his own research. Similarly, Marc's work has relied on the research of Richard Suinn, who co-developed the first maths anxiety questionnaire in 1972.

Other people recommended authors who helped them to see things in a new light. James Wallace referred to the work of Stephen Roughley, which gives an insight into how chemists actually do their research. Not surprisingly, rather than beginning their explorations from scratch every time, they keep returning to a few familiar reactions which act as a starting point, and work from there. Roughley describes these reactions as the researchers' toolbox and argues that there is generally little incentive to expend the time and effort required to set off in wholly unfamiliar directions. Such behaviour is familiar to information scientists from work on information foraging. Paula discussed the work of Lev Manovich, who, in his writings on new media theory, argues that the Internet is killing culture by decontextualizing ideas.

Authors whose work is a source of frustration included the ubiquitous, the presumptive and the lucky. Authors who had the ability to turn anything, however trivial, into a publication were criticised. James mentioned one author, for example, who latches onto whatever is current in organic chemistry and manages to recycle core experiments in numerous publications without actually saying much new.

Other authors who caused annoyance did so by reducing complex concepts to simple measurements and glossing over any assumptions that were made in the process.

Lucky authors were those whose work was poor, but who were first in their field and therefore were widely cited.

Chew, C., & Eysenbach, G. (2010). Pandemics in the age of Twitter: Content analysis of Tweets during the 2009 H1N1 outbreak. PLoS ONE, 5(11), e14118.

Jordan, A. M., & Roughley, S. D. (2009). Drug discovery chemistry: A primer for the non-specialist. Drug Discovery Today, 14(15), 731-744.

Roughley, S. D., & Jordan, A. M. (2011). The medicinal chemist's toolbox: An analysis of reactions used in the pursuit of drug candidates. Journal of Medicinal Chemistry, 54(10), 3451-3479.

Richardson, F. C., & Suinn, R. M. (1972). The Mathematics Anxiety Rating Scale: Psychometric data. Journal of Counseling Psychology, 19(6), 551-554.

Friday, 2 October 2015

Using Twitter data to provide insights into health conditions and health-related events (by Wasim Ahmed)

My research examines social media data, such as data derived from Twitter, to provide insights into health conditions and health-related events.

Twitter has 316 million monthly active users and carries 500 million tweets per day. It can serve as a source of data for social science research in its own right, covering both current and historical events, and it can also complement more traditional data sources, such as surveys and interviews. I lead the New Social Media New Social Science Network (NSMNSS) Twitter account, which has members from across academia and industry who explore the methodological implications of social media research.

One of my case studies focuses on the Ebola outbreak of 2014, for which I have amassed at least 26 million tweets. Examining tweets allows real-time monitoring of public views and opinions; health-sector staff can track these and then disseminate accurate information appropriately. In some instances, data derived from Twitter allows geographical surveillance and has the potential to be used to identify locations of possible infectious disease outbreaks. Twitter has also proved useful in emergency and crisis situations.

There are often specific methodological, ethical, privacy and copyright issues which require careful consideration, and my PhD research critically considers these too. I am also aiming to identify and evaluate software that can be used by social scientists, or those from the health sector, to analyse Twitter data. This is very important, as it allows non-computer scientists and non-programmers to retrieve Twitter data in order to ask social science research questions; a minimal sketch of what such retrieval involves is shown below.
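
As a rough illustration (and not one of the tools my PhD evaluates), the sketch below shows how a keyword search against Twitter's Search API might look in Python using the tweepy library. The credentials and the query term are placeholders.

    # A minimal sketch of keyword-based tweet retrieval with tweepy.
    # The credentials below are placeholders: real values come from a
    # Twitter developer account.
    import tweepy

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth, wait_on_rate_limit=True)

    # Collect up to 100 recent English-language tweets mentioning Ebola.
    # (In tweepy versions before 4.0 the method is api.search.)
    for tweet in tweepy.Cursor(api.search_tweets, q="ebola", lang="en").items(100):
        print(tweet.created_at, tweet.text)

At the scale of millions of tweets, researchers typically rely on the Streaming API or commercial data providers rather than the rate-limited Search API sketched here.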

Since the start of my PhD I have been disseminating my thoughts and findings. I am an active tweeter, and my research blog has proven to be very popular. Some posts have appeared in Google Scholar and others have been picked up by the mainstream media. My research has been mentioned on the British Medical Journal (BMJ) blog, the ihawkes blog, and the DiscoverText blog (for a historical data prize). I receive regular invitations to academic and industry events, and have recorded an audio lecture for a group of Masters students at Western Sydney University on how to retrieve data from Twitter and on the methodological implications of social media research.

Tuesday, 18 August 2015

What is Chemical Similarity, and How is it Useful? (by Edmund Duesbury)


In the final year of my PhD, I have been investigating different forms of alignment of chemicals, and seeing which method is best at predicting whether a chemical will be active against a particular drug target.

Similarity is subjective and specific to a particular problem domain. As an example, which two of these objects are most similar: an apple, a pumpkin or a basketball? All three are more or less spherical, but the pumpkin and the apple share the similarity of being fruit, while the pumpkin and the basketball are of similar size.

The same subjectivity exists in chemistry. A common goal when searching for similarity in chemicals is to predict whether one compound will act in the same way as another compound known to have useful pharmaceutical properties. The desired "similarity" in this case is a similarity of biological activity: something which, at present, is impossible to predict directly. However, we can attempt to infer such a property from aspects of structural similarity.
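
As a hedged aside, one widely used computational proxy for structural similarity is the Tanimoto coefficient computed over molecular fingerprints. The sketch below uses the open-source RDKit toolkit (not necessarily the software used in my own experiments), and the two SMILES strings are illustrative rather than pharmacologically meaningful.

    # A minimal sketch of fingerprint-based similarity with RDKit.
    # The molecules are illustrative only.
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    mol_a = Chem.MolFromSmiles("CCOc1ccccc1")  # ethoxybenzene
    mol_b = Chem.MolFromSmiles("CCNc1ccccc1")  # N-ethylaniline

    # Morgan (circular) fingerprints, radius 2, folded to 2048 bits.
    fp_a = AllChem.GetMorganFingerprintAsBitVect(mol_a, 2, nBits=2048)
    fp_b = AllChem.GetMorganFingerprintAsBitVect(mol_b, 2, nBits=2048)

    # Tanimoto coefficient: |A and B| / |A or B|, ranging from 0 to 1.
    print(DataStructs.TanimotoSimilarity(fp_a, fp_b))

A coefficient near 1 indicates that two molecules share most of their substructural features; whether that implies similar biological activity is exactly the open question discussed above.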

Serotonin reuptake inhibitors are a group of chemicals that includes many useful antidepressants. Consider the examples below of compounds that act as serotonin reuptake inhibitors. In the first case (Figure 1), similarity is based on the largest common fragment (highlighted in bold).

Figure 1.

The similarity here is obvious, the only difference being the Br atom.  However, the same technique fails to show the biological similarity between the two inhibitors in Figure 2.


Figure 2.

In this case, the approach of finding the largest common fragment has failed to highlight the "similarity" between the two compounds. A technique based on finding the maximum possible overlap of edges (bonds), however, is more successful (Figure 3).

Figure 3.

This method, which seeks to find a set of common fragments, emphasises a different "similarity" between these compounds.
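
For readers who want to experiment, a maximum common substructure search of this general kind can be tried with the open-source RDKit toolkit. The sketch below is an illustration under assumptions (RDKit's FindMCS with relaxed bond matching, and invented example molecules), not the exact algorithms compared in my PhD.

    # A minimal sketch of maximum common substructure (MCS) search
    # using RDKit's FindMCS. The SMILES are invented examples, not
    # actual serotonin reuptake inhibitors.
    from rdkit import Chem
    from rdkit.Chem import rdFMCS

    mol_a = Chem.MolFromSmiles("NCCc1ccccc1")      # 2-phenylethylamine
    mol_b = Chem.MolFromSmiles("NCCc1ccc(Cl)cc1")  # a chlorinated analogue

    result = rdFMCS.FindMCS(
        [mol_a, mol_b],
        maximizeBonds=True,                         # favour edge (bond) overlap
        bondCompare=rdFMCS.BondCompare.CompareAny,  # relax bond-type matching
        timeout=10,
    )
    print(result.numAtoms, result.numBonds)
    print(result.smartsString)  # SMARTS pattern of the common substructure

Setting maximizeBonds favours solutions with the greatest bond overlap, which is in the spirit of the edge-based technique illustrated in Figure 3.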

Wednesday, 12 August 2015

Perish by peer review

Many thanks to Sheila Webber for her entry on this blog regarding the recent Times Higher Education (THE) article on peer review. The article is well worth reading and makes several valid points.

However, a distinction should be made between peer review and anonymous peer review. Peer review (both formal and informal) is a widespread practice in many professions, including academia. Anonymous peer review used to be the preserve of academics but, thanks to Web 2.0, is now a feature of 21st-century life. Anyone who has selected a restaurant on TripAdvisor is likely to have read an anonymous review by a peer from the dining community.

One key difference is that a bad review on TripAdvisor can be countered by the business owner, and is not likely (on its own) to bring down the business.  The nascent careers of academics are more fragile.

Another key difference is that, on TripAdvisor, I can read the reviews, visit the restaurant, and then assess how representative each review was of my own interests and tastes. In other words, how much of a peer was the reviewer?

When academic peer review began, it was not anonymous.  At some time after WWII, arrangements by which journal editors informally contacted academics for advice on submitted articles appear to have been formalised in the system of anonymous peer review.  Prior to that, the interests, characters and prejudices of reviewers would have been known, resulting in greater openness and (occasionally) unrestrained unpleasantness.  However, the reviewer's credentials could be assessed and (if necessary) questioned.

One contributor to the THE article notes that, in his discipline alone (economics), there are 20,000 new journal articles every year.  The pool of reviewers must therefore be very large, prompting the question: to what extent is a reviewer a peer of the author?

The THE article ends with a contribution from an anonymous author who is establishing a website for particularly bad examples of anonymous peer review. I would certainly applaud such an exercise, but I hope that, as well as giving the opportunity to read the poor reviews, the site also publishes the articles that attracted them. It will be interesting to see how many suffer from poor writing, poor research and poor analysis, and how many suffer from being unorthodox and innovative.

Friday, 7 August 2015

Peer review or perish

A couple of years ago, Andrew wrote an interesting piece on peer review, which you can find here: http://sheffieldischoolresearchers.blogspot.co.uk/2013/07/peer-review-failure-and-benefits-of.html
Published yesterday in the Times Higher is a piece in which six academics (mostly full professors, I note) quote the worst peer reviews they ever received, and give their opinions on whether peer review should be jettisoned. The reviewer comment "What is this muck?" is one of the most arresting (it was in response to a paper by Prof Susan Bassnett). The last contributor chooses to remain anonymous, but is calling for examples of bad peer review to put on a website...
Times Higher Education. (2015, 6 August). The worst piece of peer review I’ve ever received. Times Higher Education. https://www.timeshighereducation.co.uk/features/the-worst-piece-of-peer-review-ive-ever-received?nopaging=1