Susan Dumais and Qi Guo
27 November 2013
The ability of modern web services such as news aggregators and search engines to tailor their results to the tastes of individuals, together with people's preference for reading opinions that reinforce their own viewpoints, has raised concerns that people are now exposed to a narrow range of viewpoints, a phenomenon referred to as the "filter bubble". In this paper we focus on increasing exposure to varied political opinions with the goal of improving civil discourse. We develop a method to algorithmically encourage people to read diverse political opinions, and test it when people actively seek information. First, analyzing data from a popular search engine, we show that people are indeed more likely to read opinions consistent with their own. Interestingly, they are more likely to read news from opposing sites when the language model of a particular news item is close to the language model of their own political leaning. Based on this finding, we describe a method for helping people read divergent opinions by selecting documents of opposing viewpoints whose language models are closer to the person's own language model. We test our method on a set of web searchers and show that pages from the opposing side whose language models were more similar than average to a person's own were clicked 38% more often than less similar pages. We also describe the long-term effects of our method, showing that people who were shown more diverse results continued reading more diverse results and overall became more interested in news.
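The selection step described above can be sketched in code. The abstract does not specify how language-model closeness is measured, so the following is a minimal illustration under assumptions: unigram language models with add-one smoothing, closeness measured as negative KL divergence from the user's model, and all function names (`language_model`, `similarity`, `rank_opposing`) are hypothetical, not the authors' implementation.

```python
from collections import Counter
import math

def language_model(texts, vocab):
    # Unigram language model with add-one smoothing over a fixed vocabulary
    # (assumed detail; the paper's exact model is not given in the abstract).
    counts = Counter()
    for t in texts:
        counts.update(t.lower().split())
    total = sum(counts[w] for w in vocab) + len(vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

def similarity(user_lm, doc_lm, vocab):
    # Negative KL divergence KL(user || doc): higher means the document's
    # language model is closer to the user's own.
    return -sum(user_lm[w] * math.log(user_lm[w] / doc_lm[w]) for w in vocab)

def rank_opposing(user_texts, opposing_docs):
    # Rank opposing-viewpoint documents so those most similar to the user's
    # own language model come first.
    vocab = set()
    for t in list(user_texts) + list(opposing_docs):
        vocab.update(t.lower().split())
    user_lm = language_model(user_texts, vocab)
    scored = [(similarity(user_lm, language_model([d], vocab), vocab), d)
              for d in opposing_docs]
    return [d for _, d in sorted(scored, reverse=True)]
```

For example, given a user who reads about tax policy, an opposing-view article that shares that vocabulary would be ranked above an unrelated one, matching the finding that linguistically closer opposing pages are clicked more.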
Published in: Social Science Computer Review