Thursday, 21 November 2013

(Re)opening a discussion about our methodological landscape (by Mary Anne Kennan, Charles Sturt University)


It is uncommon for published research to articulate the meta-theoretical assumptions which underpin different paradigms/approaches/traditions of research. Instead, methodological discussion tends to focus on research methods, techniques and tools. As researchers, however, we should recognise that our methodological assumptions determine 
  • our choice of research paradigm; 
  • the formulation of, and the way in which we consider and address our research questions; 
  • and also our selection of methods and techniques.

In this discussion we will consider the pros and cons of adopting a broader view of research methodology.  This includes being specific about meta-theoretical assumptions and their importance for achieving greater reflexive awareness of the “unconscious metaphysics” that underlie and influence how we see and research the world.  To do this we need to make a clear distinction between the concept of methodology as an overall logic of inquiry, and research method as a much narrower concept that defines processes, procedures and techniques that can be used to conduct empirical studies and collect and analyse data (Cecez-Kecmanovic, 2011).  To achieve this clear distinction, it is necessary to make explicit the assumptions and paradigms on which different methodologies are founded.

Methodological issues are therefore discussed within a broader landscape: one which includes not just research methods and techniques, but also what lies behind them.  By discussing the issues in this context, we can ask questions such as:

How does our view of the world and meta-theoretical foundation influence the type of questions we will ask (and answers we will find) as researchers?

How does our world view open (and/or limit) the potential methodological paths that we will choose from?

How does being methodologically explicit help us to select and justify the research methods we choose to answer our questions?

References

Cecez-Kecmanovic, D. & Kennan, M. A. (2013). The methodological landscape: Information systems and knowledge management. In Williamson, K. & Johanson, G. (Eds.), Research Methods: Information, Systems, and Contexts (pp. 113-138). Prahran, Victoria: Tilde Publishing. ISBN 978-0-7346-1148-2.

Tuesday, 8 October 2013

Giving an oral presentation (by J.E.A. Wallace)

All researchers need to learn to communicate details of their projects to a wider audience.  Obviously, different audiences require different approaches, but to give some perspective, here are a few hints and tips gleaned from my experiences with my own project thus far.

Firstly, and perhaps most importantly: know your audience.   This is not so much about the tone (although over-familiarity is a really bad idea in a formal setting!), but rather about the content of your talk. When presenting to your own group, there is perhaps a level of common knowledge that means that you can assume the basics and only give abbreviated reports on progress.   However, in any other context, it is best to assume that the audience has little knowledge of the specifics of your work. To that end, a brief introduction is useful to set the scene before getting into the details.

Secondly, pacing and timing are key.  A talk divided into clear sections holds an audience's interest better than a long talk without breaks. It also makes it easier to rehearse timings and to be flexible.  Both are useful when preparing to talk in a formal setting such as a conference.  In such a setting, the Chair will usually offer some sort of signal near the end of your slot, or a clock will be made available.  In either case, rehearsing timings beforehand is useful.  Do bear in mind, though, that people tend to talk more quickly when nervous.
On the subject of nerves, remember that the audience is there to hear what you have to say. Everyone in the room is on your side so if there are any minor slip-ups, focus on your work and just keep going.  No-one will object if you need to take a moment to gather your thoughts mid-talk.

Lastly, don’t dread question sessions – they are usually a great opportunity to engage with the audience and possibly to establish some beneficial networking connections. From personal experience, these sessions have led to many worthwhile discussions after the fact, and to one worthwhile collaboration.

If anyone has any other experiences or additional suggestions for presenting, please feel free to contribute in the comments.

Monday, 7 October 2013

Back to basics

Following last month's discussion, James Wallace made the useful observation that, since the bulk of people attending the researchers' group are PhD students, it would be helpful to have occasional talks on some of the basic skills needed for any researcher.  Two subjects he suggested were presenting and posters.

Not only was James forthcoming with his suggestions, he was also prepared to back them up by volunteering his services.  He kindly offered to lead this month's discussion on presenting.  He provided a useful blog entry in advance of the session which I took the liberty of editing a little.  James' original entry asserted that it is important for young researchers to develop the ability to communicate details of their projects.  I edited it to read "All researchers need to learn to communicate details of their projects to a wider audience."





Thursday, 12 September 2013

Filtering the Web (by Simon Wakeling)

The emergence of the internet has allowed its users previously unimagined access to information. A corollary of this deluge has been information overload – the difficulty users face in finding, filtering and processing relevant information from amongst the sea of available content. The web services that have emerged to connect us with this information, from search engines to social networks, have in recent years increasingly turned to personalisation as a means of helping users navigate the web successfully. 

In many cases this personalisation takes the form of algorithmic content filtering, with implicit user feedback signals ranging from geography to click history to the activity of other users within social networks determining the information presented to the user (indeed, it has been suggested that Google uses up to 57 different data sources to filter its content). An unintended consequence of this process, which, it should be noted, is rarely made explicit to the user, is to gradually and invisibly disconnect information seekers from contrary content. A much-used example concerns the political liberal whose regular consumption of left-wing sources and active online network of similarly minded friends eventually sees any conservative material excluded from his search results and news feeds. The result is a “filter bubble” – a term coined by Eli Pariser to describe the self-perpetuating results of the information intermediary’s algorithm. Pariser sees web users inhabiting “parallel but separate universes” in which the absence of the alien undermines objectivity.  
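To make the mechanism concrete, here is a heavily simplified sketch of click-history personalisation. The feed, source names and scoring rule are all hypothetical; real systems combine many more signals than this, but the self-reinforcing effect is the same:

```python
from collections import Counter

def personalised_ranking(items, click_history):
    """Rank feed items by how often the user previously clicked their source.

    items: list of (source, headline) tuples
    click_history: list of source names the user has clicked before
    """
    clicks = Counter(click_history)
    # Familiar sources float to the top; unfamiliar (and possibly
    # contrary) sources sink towards the bottom of the feed.
    return sorted(items, key=lambda item: clicks[item[0]], reverse=True)

feed = [("right_herald", "Story A"),
        ("left_daily", "Story B"),
        ("left_daily", "Story C")]
history = ["left_daily", "left_daily", "left_daily"]

# The left-leaning reader's habitual source dominates the ranking.
print(personalised_ranking(feed, history))
```

Because today's ranking shapes tomorrow's clicks, the loop feeds itself: the more the familiar source is clicked, the higher it ranks, and the less likely the contrary source is ever seen.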

It is interesting to note that philosophers have already begun to engage with this issue. From an epistemological perspective, Thomas Simpson has noted that in an age when search engines are increasingly seen as “expert”, an inbuilt and invisible confirmation bias increases the potential for beliefs to be formed on the basis of “reasons unconnected with the truth”. Others have begun considering the ethical implications. Is the filter bubble a form of censorship? Do we as users have a right to expect objectivity from the systems we use?
 
These are interesting questions for information scientists to consider. First, though, as scientists we might want to investigate whether the phenomenon actually exists and, if it does, how it can best be measured and its implications studied. Is it necessarily a bad thing? We might also begin imagining solutions. An example of work already undertaken in this area is Balancer, which presents a stick man leaning under the weight of the “liberal” or “conservative” news sources the user has accessed.  It’s certainly novel, despite its limitations and acknowledged imperfections. Perhaps, though, the significance of the filter bubble problem requires more revolutionary solutions. We just need to work out what they are…
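The bookkeeping behind a tool like Balancer can be imagined as a simple tally of the leanings of visited sources. A rough sketch, with invented source labels rather than Balancer's actual classifications:

```python
# Hypothetical source classifications, for illustration only.
LEANING = {"left_daily": "liberal", "right_herald": "conservative"}

def reading_tilt(visited):
    """Fraction of classified visits that were to 'liberal' sources.

    0.5 indicates balanced reading; values near 0 or 1 suggest
    the reader may be inside a bubble.
    """
    labels = [LEANING[s] for s in visited if s in LEANING]
    if not labels:
        return 0.5  # nothing classified: report as balanced
    return labels.count("liberal") / len(labels)

print(reading_tilt(["left_daily", "left_daily", "right_herald"]))
```

A measure this crude obviously inherits all the problems of classifying sources in the first place, which is part of why measuring the phenomenon is itself a research question.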

Monday, 9 September 2013

Decision making for microbes

A long, long time ago, in a life far far away, I spent several years working as a researcher in the life sciences.  When I shifted into information sciences, I found myself looking at aspects of the subject from a biologist's perspective.  

One of the aspects that I found myself considering was that of decision making.  Living things benefit if they can make best use of the resources that surround them.  To do so, they need information.  Or so I suggested in a paper that I wrote ten years ago (Madden, 2004) in which I argued that when organisms evolved the ability to move (and possibly before), they evolved a need for information.  

As is always the case at the discussion meetings, the talk went in some interesting directions.  Unsurprisingly I was pulled up for my lax use of the term "World View".  I had quoted Checkland's assertion that "Judging from their behaviour, all beavers, all cuckoos, all bees have the same W[orld View], whereas man has available a range of W[orld View]s." (Checkland, 1984, p218).  It was, I conceded in the discussion, a questionable assertion that I had not questioned.


Checkland, P. B. (1984). Systems Thinking, Systems Practice (2nd ed.). Chichester: Wiley.
Madden, A. D. (2004). Evolution and information. Journal of Documentation, 60(1), 9-23.

Sunday, 21 July 2013

Peer review, failure and the benefits of persistence

Earlier today I found myself reflecting on our recent discussion of peer review, after coming across the reviews that accompanied the rejection of a paper I submitted a while back (see below).  The second review, though somewhat scathing, at least offered a few pointers to what the reviewer thought was wrong. The first reviewer obviously did not suffer from self-doubt: her/his comments were wholly unhelpful.

The paper was later published (albeit in a less prestigious journal) and was recently nominated for an outstanding paper award.

It could well be argued that I had submitted to the wrong journal.  Clearly the audience of The Electronic Library was more receptive than the reviewers whose comments are published below.  However, it would be helpful if reviewers could be reminded of the value of supporting their statements with evidence, and discouraged from presenting opinions as facts.  Referee 2 asserted that "The presentation and argumentation is poor by any standards".  This was a rather arrogant extrapolation.  The reviewer's standards were presented as indicative of all standards.  Such practice is common amongst journalists, but should, I feel, be avoided by academics, even under the cloak of anonymity.  

Fortunately for me and my co-authors, Referee 2's belief that her/his standards were universal proved not to be the case.

Referee 1
This is very weak in many ways: the related work is unacceptably shallow, the experiment itself may be fatally flawed (I cannot tell from the current description but it feels that way) and the results are trivially presented. Even if they fixed the above I think the design and their current approach to investigating this issue may be problematic for publication; we already know a lot about relevance criteria and evaluative judgments and I doubt their approach is going to give much that is new. This almost assumes that nothing has been done in this area and their experiment feels very naïve. The only (relatively) new aspect is the meta-cognitive approach but they need to do something sensible with this rather than treating it as a buzzword. 

Referee 2
I recommend that this paper not be accepted by the journal. The paper has many serious problems that need to be addressed and lead the reviewer to reject the paper.

The presentation and argumentation is poor by any standards, and well below what would be expected of a journal like [name of journal]. The paper needs a good abstract that provides a good summary of the paper. The current abstract is poorly written, and gives little insights into the key findings of the study. Most readers will only read the abstract and not read the complete paper unless the abstract is good.

The introduction section is weak: the paper is not clear as to the research problem, and why it is important and significant. The paper has no clear theoretical framework, nor an adequate literature review. Research questions not clearly spelled out.

The research design section of the paper is poor for an empirical study. This section is inadequate and does not reach the level that another researcher could replicate the study. Needs a data collection section and a data analysis section.

The Results section is unclear and not comprehensible. 

The paper has no Discussion section that discusses the key findings in relation to previous studies. Where are the limitations and implications?

Much of the Conclusions section needs to be in an earlier section. No further research is discussed.

The references section is messy, not in alphabetical order and has missing references.

Overall, the quality of this paper is poor in structure and content. The paper lacks a theoretical framework, is a poor empirical paper and is embarrassingly weak for a paper submitted to [name of journal]. I recommend that the paper be rejected.

Tuesday, 9 July 2013

Introducing KNIME (by Edmund Duesbury)

This blog entry could revolutionise the way you do information-based research!

KNIME (pronounced /naɪm/, rhyming with "time") represents an exciting step forward in programming and the development of scientific tools for the information sciences.  KNIME is a freely available, open-source workflow tool which allows users, in a few clicks, to develop "programs" which can 
  • generate statistics from given input data,  
  • perform visual analyses and
  • generate graphs and web-based reports.

KNIME has a wide variety of program libraries built in. Elements of these can be dragged and dropped into a workspace, enabling users to create a program in mere minutes.  The resulting workflows are not only useful, they look good as well!  Because KNIME uses visual representations of program components, it's incredibly easy to use.  There is no need to mess around with UML or pseudocode, and planning of workflows becomes straightforward.

In this session, James Wallace and I will try to introduce you to KNIME, and give an example of how to obtain simple statistics from some input data.  We’ll also demonstrate a classification method for plants which anyone can implement and which is easy to understand. Please feel free to come along, ask questions, and have a play with KNIME.
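For readers who want a feel for what such a workflow computes, here is a rough Python equivalent of chaining a file reader into a statistics node. The sample data is invented for illustration; in KNIME itself all of this is done graphically, with no code at all:

```python
import statistics

# Hypothetical input: one numeric measurement per plant sample,
# standing in for whatever a KNIME reader node would load.
data = [5.1, 4.9, 6.3, 5.8, 5.5]

# The equivalent of dropping a "Statistics" node onto the workflow
# and wiring the data reader into it.
summary = {
    "n": len(data),
    "mean": statistics.mean(data),
    "stdev": statistics.stdev(data),
    "min": min(data),
    "max": max(data),
}
print(summary)
```

In KNIME the same result takes a couple of drag-and-drop operations, which is precisely the appeal for researchers who would rather not write even this much code.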

More information, including download links, can be found at: http://www.knime.org/knime