The emergence of the internet has given its users previously unimagined access to information. A corollary of this deluge has been information overload: the difficulty those users face in finding, filtering and processing relevant information amongst the sea of available content. The web services that have emerged to connect us with this information, from search engines to social networks, have in recent years increasingly turned to personalisation as a means of helping users navigate the web successfully.
In many cases this personalisation takes the form of algorithmic content filtering, in which implicit feedback signals, ranging from geography to click history to the activity of other users within a social network, determine the information presented to the user (indeed, it has been suggested that Google uses up to 57 different data sources to filter its content). An unintended consequence of this process, which is rarely made explicit to the user, is to gradually and invisibly disconnect information seekers from contrary content. A much-used example concerns the political liberal whose regular consumption of left-wing sources, and active online network of similarly minded friends, eventually sees any conservative material excluded from his search results and news feeds. The result is a “filter bubble” – a term coined by Eli Pariser to describe the self-perpetuating results of the information intermediary’s algorithm. Pariser sees web users inhabiting “parallel but separate universes” in which the absence of the alien undermines objectivity.
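To make the mechanism concrete, here is a minimal sketch in Python of how implicit signals of this kind might be combined into a personalised ranking. The signal names, weights and the top-k cut-off are invented purely for illustration; real systems combine far more signals in far more sophisticated ways.

```python
def personalised_score(item, user, weights=(0.2, 0.5, 0.3)):
    """Combine three hypothetical implicit signals into one relevance score."""
    w_geo, w_click, w_social = weights
    geo_match = 1.0 if item["region"] == user["region"] else 0.0
    click_affinity = user["topic_clicks"].get(item["topic"], 0.0)  # normalised 0..1
    social_signal = min(item["friend_shares"], 5) / 5.0            # capped and scaled
    return w_geo * geo_match + w_click * click_affinity + w_social * social_signal


def personalised_feed(items, user, k=10):
    """Rank all candidate items and keep only the top k.

    Everything below the cut is silently filtered out, which is where the
    'bubble' effect comes from: content the score disfavours is never seen.
    """
    return sorted(items, key=lambda i: personalised_score(i, user), reverse=True)[:k]
```

The point of the sketch is simply that the user never chooses these weights, and never sees what fell below the cut.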
It is interesting to note that philosophers have already begun to engage with this issue. From an epistemological perspective, Thomas Simpson has noted that in an age when search engines are increasingly seen as “expert”, an inbuilt and invisible confirmation bias increases the potential for beliefs to be formed on the basis of “reasons unconnected with the truth”. Others have begun considering the ethical implications. Is the filter bubble a form of censorship? Do we as users have a right to expect objectivity from the systems we use?
These are interesting questions for information scientists to consider. First, though, as scientists we might want to establish whether the phenomenon actually exists and, if it does, how it can best be measured and its implications studied. Is it necessarily a bad thing? We might also begin imagining solutions. An example of work already undertaken in this area is Balancer, which presents a stick man leaning under the weight of the “liberal” or “conservative” news sources the user has accessed. It is certainly novel, despite its limitations and acknowledged imperfections. Perhaps, though, the significance of the filter bubble problem requires more revolutionary solutions. We just need to work out what they are…
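As a rough illustration of the kind of measurement such a tool might perform, the sketch below computes a simple left/right “lean” score from a browsing history. The source labels and the scoring are assumptions made for the sake of the example, not Balancer’s actual method.

```python
# Hypothetical labels: -1 = liberal-leaning, +1 = conservative-leaning.
SOURCE_LEANING = {
    "guardian.co.uk": -1,
    "huffpost.com": -1,
    "foxnews.com": +1,
    "telegraph.co.uk": +1,
}


def reading_lean(visited_domains):
    """Average leaning of labelled sources in a browsing history.

    Returns a value from -1.0 (all liberal) to +1.0 (all conservative);
    sources without a label are simply ignored.
    """
    scores = [SOURCE_LEANING[d] for d in visited_domains if d in SOURCE_LEANING]
    return sum(scores) / len(scores) if scores else 0.0


history = ["guardian.co.uk", "huffpost.com", "guardian.co.uk", "bbc.co.uk"]
print(f"lean = {reading_lean(history):+.2f}")  # negative: the stick man tips left
```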