Monday, 16 January 2017

Academic publishing


This month’s Researchers’ Discussion Group is inspired by Philip Moriarty’s blog entry on the LSE Impact blog, Addicted to the brand: The hypocrisy of a publishing academic, reproduced below:

“I’m going to put this as bluntly as I can; it’s been niggling and nagging at me for quite a while and it’s about time I got it off my chest. When it comes to publishing research, I have to come clean: I’m a hypocrite. I spend quite some time railing about the deficiencies in the traditional publishing system, and all the while I’m bolstering that self-same system by my selection of the “appropriate” journals to target.
Despite bemoaning the statistical illiteracy of academia’s reliance on nonsensical metrics like impact factors, and despite regularly venting my spleen during talks at conferences about the too-slow evolution of academic publishing towards a more open and honest system, I nonetheless continue to contribute to the problem. (And I take little comfort in knowing that I’m not alone in this.)
One of those spleen-venting conferences was a fascinating and important event held in Prague back in December, organized by Filip Vostal and Mark Carrigan: “Power, Acceleration, and Metrics in Academic Life”. My presentation, The Power, Perils and Pitfalls of Peer Review in Public – please excuse the Partridgian overkill on the alliteration – largely focused on the question of post-publication peer review (PPPR) via online channels such as PubPeer. I’ve written at length, however, on PPPR previously (here, here, and here) so I’m not going to rehearse and rehash those arguments. I instead want to explain just why I levelled the accusation of hypocrisy and why I am far from confident that we’ll see a meaningful revolution in academic publishing any time soon.
Let’s start with a few ‘axioms’/principles that, while perhaps not being entirely self-evident in each case, could at least be said to represent some sort of consensus among academics:
·         The business model of the traditional academic publishing industry is deeply flawed. While some might argue that George Monbiot – or at least the sub-editor who provided the title for his article on the subject a few years back (“Academic publishers make Murdoch look like a socialist”) – perhaps overstated the problem just a little, it is clear that the profit margins and working practices for many publishers are beyond the pale. (A major contribution to those profit margins is, of course, the indirect and substantial public subsidy, via editing and reviewing, too often provided gratis by the academic community).

·         A journal’s impact factor (JIF) is clearly not a good indicator of the quality of a paper published in that journal. The JIF has been skewered many, many times with some of the more memorable and important critiques coming from Stephen Curry, Dorothy Bishop, David Colquhoun, Jenny Rohn, and, most recently, this illuminating post from Stuart Cantrill. Yet its very strong influence tenaciously persists and pervades academia. I regularly receive CVs from potential postdocs where they ‘helpfully’ highlight the JIF for each of the papers in their list of publications. Indeed, some go so far as to rank their publications on the basis of the JIF.

·         Given that the majority of research is publicly funded, it is important to ensure that open access publication becomes the norm. This one is arguably rather more contentious and there are clear differences in the appreciation of open access (OA) publishing between disciplines, with the arts and humanities arguably being rather less welcoming of OA than the sciences. Nonetheless, the key importance of OA has laudably been recognized by Research Councils UK (RCUK) and all researchers funded by any of the seven UK research councils are mandated to make their papers available via either a green or gold OA route (with the gold OA route, seen by many as a sop to the publishing industry, often being prohibitively expensive).

With these three “axioms” in place, it now seems rather straight-forward to make a decision as to the journal(s) our research group should choose as the appropriate forum for our work. We should put aside any consideration of impact factor and aim to select those journals which eschew the traditional for-(large)-profit publishing model and provide cost-effective open access publication, right?
Indeed, we’re particularly fortunate because there’s an exemplar of open access publishing in our research area: The Beilstein Journal of Nanotechnology. Not only are papers in the Beilstein J. Nanotech free to the reader (and easy to locate and download online), but publishing there is free: no exorbitant gold OA costs nor, indeed, any type of charge to the author(s) for publication. (The Beilstein Foundation has very deep pockets and laudably shoulders all of the costs).
But take a look at our list of publications — although we indeed publish in the Beilstein J. Nanotech., the number of our papers appearing there can be counted on the fingers of (less than) one hand. So, while I espouse the three principles listed above, I hypocritically don’t practice what I preach. What’s my excuse?
In academia, journal brand is everything. I have sat in many committees, read many CVs, and participated in many discussions where candidates for a postdoctoral position, a fellowship, or other roles at various rungs of the academic career ladder have been compared. And very often, the committee members will say something along the lines of “Well, Candidate X has got much better publications than Candidate Y”…without ever having read the papers of either candidate. The judgment of quality is lazily “outsourced” to the brand-name of the journal. If it’s in a Nature journal, it’s obviously of higher quality than something published in one of those, ahem, “lesser” journals.
If, as principal investigator, I were to advise the PhD students and postdocs in the group here at Nottingham that, in line with the three principles above, they should publish all of their work in the Beilstein J. Nanotech., it would be career suicide for them. To hammer this point home, here’s the advice from one referee of a paper we recently submitted:
“I recommend re-submission of the manuscript to the Beilstein Journal of Nanotechnology, where works of similar quality can be found. The work is definitively well below the standards of [Journal Name].”

There is very clearly a well-established hierarchy here. Journal ‘branding’, and, worse, journal impact factor, remain exceptionally important in (falsely) establishing the perceived quality of a piece of research, despite many efforts to counter this perception, including, most notably, DORA. My hypocritical approach to publishing research stems directly from this perception. I know that if I want the researchers in my group to stand a chance of competing with their peers, we have to target “those” journals. The same is true for all the other PIs out there. While we all complain bitterly about the impact factor monkey on our back, we’re locked into the addiction to journal brand.
And it’s very difficult to see how to break the cycle…”



Tuesday, 18 October 2016

How real can VR be?

In Arthur Conan Doyle's story "The Adventure of the Mazarin Stone" (published in 1921), Sherlock Holmes fools some criminals into revealing the whereabouts of a diamond by convincing them that he is playing his violin in the neighbouring room. In fact, he is hiding behind some curtains listening to their conversation while a gramophone record plays the Hoffman 'Barcarole'.

Some years earlier, in 1895, at the Grand Café in Paris, the Lumière brothers presented a film of a train arriving at a station.  Legend has it that, as the train loomed large on the screen, the audience panicked and ran away screaming.  Almost certainly, though, the accounts are as fictional as Holmes' trickery with the gramophone.

Both technologies record and reproduce aspects of reality.  What would it take though, for a virtual reality to be mistaken for a real reality?  Should there be a VR version of a Turing test?  If a listener were placed outside two booths, one containing a real violinist and the other playing a recording of the violinist, would the listener be fooled?  Could a projection be displayed beside a closed window in such a way that someone in the room could not tell which showed the outside world?

Even if a technology could pass such a test when new, could it continue to do so?  Cutting-edge technology quickly becomes blunt. CGI special effects that seemed impressive 20 years ago now seem clumsy.  Arguably, the same question could be asked of the Turing test for artificial intelligence (AI).  If the AI did not learn in the same way as humans do, it might not continue to pass the test consistently.

Monday, 12 September 2016

False information on social media platforms (by Wasim Ahmed)

This month’s discussion is inspired by the panic that was caused at Los Angeles International Airport (LAX) over false claims that there was an active shooter on the premises. Police did not identify a shooter, and the reports seem to have stemmed from the arrest of a man who was wearing a mask and wielding a plastic sword. Only weeks before, there had been reports of a shooting at JFK airport, which also turned out to be a false alarm. The ‘gunfire’ was in fact Usain Bolt’s cheering fans.

Both these cases had an element of truth.  At LAX, those posting to social media genuinely mistook a man wearing a mask for a shooter, and at JFK they mistook cheering for gunfire. However, there are also cases where information is posted on social media with the sole intention of deceiving. During the 2011 London riots, for example, several unsubstantiated claims were spread on Twitter, including the following:
·         Rioters attack London and release animals
·         Rioters cook their own food in McDonald’s
·         Police beat a 16-year-old girl
·         London Eye set on fire.

During Hurricane Sandy in 2012, certain false tweets were picked up by the mainstream media and reported as fact.  More generally, regular users of social media platforms will encounter highly shared false content on Twitter and Facebook. Some such content may simply be a practical joke.  A recent article, for example, reported that sixty Facebook profiles had been created for non-existent Houston restaurants. Often, though, the misinformation is malicious. Several false rumours about transgender people have been spreading on Facebook (e.g. the rumour that a company was installing urinals in women’s restrooms).

Public figures are often the subject of dishonest postings. Facebook recently apologized for promoting a false story about Fox News broadcaster Megyn Kelly in its #trending section. According to Craig Silverman (founding editor of BuzzFeed Canada), Facebook’s algorithms contribute significantly to the spread of such hoaxes.

China takes the issue of false news on social media very seriously, and has recently clamped down on it. A case could be made for a system in which users are prosecuted for posting malicious information during disasters, but the problem of more casual false information is harder to solve. Educational measures, such as teaching users the importance of basic fact checking, would help to ease the trend. Craig Silverman has collated several must-read sources on how to verify information from social media users in real time, and I would highly recommend looking at some of these resources before the discussion group. There is also the Verification Handbook, a guide to verifying digital content for emergency services, authored by journalists from the BBC, Storyful, ABC, Digital First Media and others.


Wednesday, 17 August 2016

Writing badly - the key to academic success?

There are few academic works that I would actually claim to have enjoyed reading.  Michael Billig's book "Learn to write badly: How to succeed in the social sciences" proved to be one of the exceptions.

The iSchool researchers' discussion group has talked about academic writing before.  However, armed with new perspectives from Prof Billig, I thought I would raise the subject again.

Of the various points that Michael Billig makes in his book, one seemed particularly relevant to the iSchool, i.e., the argument that much of the writing in the social sciences involves removing information. He discusses at length the passivisation that occurs in the process of writing for academic publication.

As someone who has spent many years working in the social sciences, I found that last sentence came naturally.  It is, however, exactly the sort of sentence that Prof Billig criticises.  When I converted a process to a noun (the process of turning active verbs into passive ones), I stripped out a lot of information (about who was doing what, to which verbs) to produce an academic-sounding word (passivisation).  By such means, social scientists bring things into being (Massification, Normatization, plus other ...izations and ...ications).  A social scientist whose new thing is discussed and analysed has the makings of a good career.  However, the people who do the ...izing or ...ify-ing are all too often removed from the discussion.  The focus tends to be on hypothetical processes rather than real people.  Perhaps the social sciences are at risk of being de-societalized.

Thursday, 21 April 2016

Staying private while searching the Internet (by Alessandro Checco)


One of my interests is online data privacy: how can users access customised content without being tracked individually? Can we break the vicious circle in which advertisers spy on users and, in reaction, users hide even more?

My idea is to allow a milder form of identification than the classic approach, in which a user is uniquely identified.  Instead, users would automatically be hidden within a crowd of similar users.

The main challenge of this approach is combating spam and Sybil attacks, but it turns out that this can be done quite easily with cryptographic tokens such as e-cash.
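To give a flavour of the idea (this is only a toy sketch, not the actual system, and all the names and profiles below are invented), a few lines of Python show how users with similar interest profiles might be grouped into a crowd, with each query then issued under the crowd’s identifier rather than the individual’s:

```python
# A minimal sketch of "hiding in a crowd": users with similar interest
# profiles share a single group identifier, so the search engine sees
# the group, not the individual. All names and data here are invented.
from collections import defaultdict

# Toy interest profiles: user -> weights over broad topic categories.
profiles = {
    "alice": {"sport": 0.7, "news": 0.2, "health": 0.1},
    "bob":   {"sport": 0.6, "news": 0.3, "health": 0.1},
    "carol": {"health": 0.8, "news": 0.2},
    "dave":  {"health": 0.7, "news": 0.3},
}

def dominant_topic(profile):
    """Assign a user to a crowd based on their strongest interest."""
    return max(profile, key=profile.get)

# Build crowds: to the engine, everyone in a crowd looks the same.
crowds = defaultdict(list)
for user, profile in profiles.items():
    crowds[dominant_topic(profile)].append(user)

def issue_query(user, query):
    """Send the query under the crowd's identifier, not the user's."""
    crowd_id = dominant_topic(profiles[user])
    # In a real system the request would also carry an anonymous,
    # spend-once cryptographic token (e-cash style) proving crowd
    # membership, which is what keeps spammers and Sybil identities out.
    return {"group": crowd_id, "query": query}

print(dict(crowds))
print(issue_query("alice", "football scores"))
```

The point of the grouping is that personalisation can still work at the level of the crowd’s shared interests, while no individual request can be pinned to a single user.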

Another topic I am exploring is how to detect a search engine ‘learning’ about sensitive topics during a search session. Interestingly, the advertisements that appear during searches provide evidence of tracking on sensitive topics. Google does not seem to use our entire search histories for advertising, but only the last 4-5 queries.
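To make the ‘last few queries’ observation concrete, here is a toy sketch of the kind of measurement involved. It does not query any real service: the fetch_ads stub simply simulates an engine that personalises ads on a sliding window of recent queries (an assumption made purely for illustration), and the experiment counts how long ads related to a sensitive query persist:

```python
# Toy measurement: issue one "sensitive" query, follow it with neutral
# queries, and count how many subsequent searches still show ads
# related to the sensitive topic. fetch_ads() stands in for scraping
# the ad panel of a real results page; here it simulates an engine
# that personalises on the last WINDOW queries (an assumption, not a
# description of any real engine).

WINDOW = 4  # hypothetical size of the personalisation window

def fetch_ads(history):
    """Simulated ad selection based on the last WINDOW queries."""
    recent = history[-WINDOW:]
    return {"diabetes" if "insulin" in q else "generic" for q in recent}

history = []
sensitive = "insulin prices"
neutral = ["weather", "train times", "football", "recipes", "news"]

history.append(sensitive)
persistence = 0
for q in neutral:
    history.append(q)
    if "diabetes" in fetch_ads(history):
        persistence += 1
    else:
        break

print(f"Sensitive ads persisted for {persistence} further queries")
```

In a real experiment the simulated window would be unknown, and the persistence count measured from live result pages would be the evidence for how many recent queries the engine actually uses.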


Thursday, 31 March 2016

Uses and Risks of Microblogging in Small and Medium-sized Enterprises (by Soureh Latif Shabgahi)

Microblogs, such as Twitter and Yammer, have become very popular for both personal and professional pursuits. Some authors have claimed that social media can radically transform organisations. However, there is a lack of empirical research evaluating that claim. My thesis investigated the uses and perceived risks of microblogging in UK-based Small and Medium-sized Enterprises (SMEs).

The research adopted a qualitative methodology because of the intention to explore how participants understand microblogging. Twenty-one semi-structured interviews (either face-to-face or by phone) were conducted with participants in SMEs based in South Yorkshire, UK. A thematic approach was taken to analysing the interview data.

Most of the organisations approached adopted microblogs by a process of trial and error. Smaller organisations did not make much use of the platforms for direct advertising, i.e. selling products to others. The participants focused more on other uses. Internally, microblogs were chiefly used by individuals to collaborate remotely with their co-workers and to ask or respond to questions. Externally, microblogging was mainly used to enable users to exchange information, to communicate more with customers and to build relationships with clients. A visual representation was developed to illustrate the uses of microblogging in SMEs. Participants in the study particularly valued microblogging for its limited functionality, its cost-effectiveness and the fact that it could be used via mobile phones.

Most participants perceived microblogs to be highly risky, i.e. to expose the organisation and employees to danger. The commonest type of risk was seen to be the danger of damaging the reputation of the business. The majority of participants talked about controlling what types of information should be shared on the platforms and controlling who should engage with microblogging. To illustrate such feelings around risks, two visual representations were developed.

This research is the first in-depth study of the uses of microblogging in UK-based SMEs. It was found that microblogging did not radically transform organisations. It was seen as a useful form of communication for SMEs, but no more than that. The limited financial resources and professional expertise that SMEs have were key to how they adopted the technology. As regards practical implications, something could be done to address the trial-and-error approach to using microblogs found to be typical of smaller organisations. For example, managers could be given training courses on how best to use microblogging. To improve the management of risks, more concrete expert advice could be developed, and organisations would benefit from the sharing of model policies.

Tuesday, 15 March 2016

Factors that lead to ERP replacement in Higher Education Institutions in Saudi Arabia: A case study (by Arwa Mohammed J Aljohani)

The use of Enterprise Resource Planning (ERP) systems in Higher Education Institutions (HEIs) has increased substantially over the last few decades. A review of the literature relating to Information Systems (IS) and ERPs confirmed that few research studies have considered ERPs in Higher Education: most have focused on their use in business.  In addition, the literature tends to concentrate on issues relating to the adoption of ERPs, with a particular emphasis on success stories. Consequently, studies that focus on the problems and difficulties associated with the replacement of ERPs, particularly in HEIs, are rare.

Knowledge of the decision-making processes associated with ERP replacement is clearly of value to those who have to make the decisions, yet little is known about how and why such decisions are made, or about the factors that influence them.  This study aims to fill some of these gaps.  The researcher seeks to investigate the causes and consequences of ERP replacement in a Saudi Arabian HEI.  Data relating to the case study at the heart of this project comes from 17 semi-structured interviews analysed using a Grounded Theory (GT) approach.

The study aspires to make both theoretical and practical contributions to the field.  In particular, it will increase understanding of decision-making processes in HEIs by helping to identify why and when they should consider replacing their ERP systems. A framework is being developed that will help identify factors and issues that should be considered before the decision to replace is made. The study therefore has clear practical value to decision-makers in HEIs and will help to ensure efficient use and exploitation of current systems, and safe adoption of new ones. The research should also be of relevance to system vendors, who have a clear interest in the use of ERP in higher education.