Ash Catton

PhD Candidate in Clinical Psychology

How can social networking sites be used to influence the voting intentions of the public?


May 17, 2018

Until recently, studies on the influence of news media on voter intention have relied on conventional platforms such as television, newspapers, and news websites. The concept of ‘selective exposure’ was coined to refer to the deliberate and voluntary act of choosing which articles to pay attention to, and the biases that arise in that process (Hart et al., 2009). There is no clear consensus in the literature on which biases are implicated in this deliberative process; studies tend to explain the phenomenon in terms of either confirmation bias or the perceived utility (usefulness) of the information (Knobloch-Westerwick & Kleinman, 2011), although some papers have suggested that context may determine which of these arises (Garrett, 2009). Despite the disagreement over which biases are involved, it is clear that selective exposure to media such as television news is mediated by some degree of anticipated agreement with either the substance or the delivery of the content (Iyengar & Hahn, 2009). However, the introduction of online social networking has presented an alternative to conventional media platforms.

Social Networking Sites (SNS) are defined by Boyd and Ellison (2007) as web-based services that allow users to create a publicly visible profile, maintain and display a network of connections with other users, and communicate within those connections. Since their inception, SNS have evolved beyond a service for communication among peers into a mainstream platform for news content, with one early study of Twitter activity noting that over 85% of ‘trending’, or frequently discussed, content referred to headline news topics (Kwak, Lee, Park, & Moon, 2010). The presence of both professional and amateur organisations and journalists on SNS fora means that end users can select the accounts they wish to follow and have the content delivered to them on their devices. The transition of SNS into a news medium has presented at least two new challenges to the regular consumer of news: echo chambers and misinformation.

The nature of SNS allows users to select only those accounts that provide content in agreement with their beliefs and values, and political orientation is one factor that appears to shape the terrain of a user’s SNS landscape. This has been described as ‘political homophily’: association with others on the basis of shared political ideology (Boutyline & Willer, 2017). For instance, one study measured political homophily on Twitter based on the followers of official US political party accounts and successfully predicted the political orientation of those users (Colleoni, Rozza, & Arvidsson, 2014). The capacity of SNS users to follow only those accounts that cater to their beliefs and values leads to the ‘echo chamber’ effect, in which they are “mainly listening to louder echoes of their own voices” (Sunstein, 2007). This effect appears limited to political issues, or issues discussed from ideological perspectives such as school shootings, as discussion of non-political matters has been noted to cross ideological boundaries (Barberá, Jost, Nagler, Tucker, & Bonneau, 2015).
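
To make the idea of follower-based inference concrete, the sketch below labels a user’s likely orientation by which party ‘seed’ accounts they follow, then scores homophily as the share of like-minded ties. This is a minimal toy model under invented assumptions, not the large-scale text-classification pipeline Colleoni et al. actually used; every handle and data point here is hypothetical.

```python
# Toy model of follower-based orientation labelling and homophily scoring.
# NOT the method of Colleoni et al. (2014); all handles are hypothetical.

PARTY_A_SEEDS = {"@party_a_official", "@party_a_leader"}   # hypothetical handles
PARTY_B_SEEDS = {"@party_b_official", "@party_b_leader"}   # hypothetical handles

def infer_orientation(followed):
    """Label a user by which party's seed accounts dominate their follow list."""
    a = len(followed & PARTY_A_SEEDS)
    b = len(followed & PARTY_B_SEEDS)
    if a == b:
        return "unknown"
    return "party_a" if a > b else "party_b"

def homophily(user_label, neighbour_labels):
    """Share of a user's ties that connect to others with the same label."""
    if not neighbour_labels:
        return 0.0
    return sum(lab == user_label for lab in neighbour_labels) / len(neighbour_labels)

label = infer_orientation({"@party_a_official", "@a_news_outlet"})
print(label, homophily(label, ["party_a", "party_a", "party_b"]))  # party_a 0.666...
```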

People incubated in an echo chamber have been the unwitting subjects of targeted political marketing campaigns, because a user’s SNS profile can serve as a valuable source of accurate data. For instance, Youyou, Kosinski, and Stillwell (2015) were able to use a user’s Facebook data (their Likes) to produce a judgement of that user’s personality more accurate than those made by the user’s own peers. The online presence of a large section of the voting public has made SNS a major campaign tool. In the 2016 US Presidential election, a marketing firm, Cambridge Analytica, used SNS-generated data to create voter-targeting models designed to appeal to 13.5 million potential Donald Trump voters who had yet to be reached by conventional means (Persily, 2017). Cambridge Analytica’s method was to feed voters information that appealed to their existing intuitions and “…give it a little push…” by selectively presenting advertisements on SNS that had the visual format of a news article (Channel 4, 2018). Where previously the consumer of news media selectively exposed themselves to material predicted to be agreeable, Cambridge Analytica selectively exposed millions of SNS users to material carefully formulated to be agreeable to them, thus taking the choice of exposure away from the consumer.
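
Youyou et al. report using regularised (LASSO) linear regression over users’ Facebook Likes to predict personality trait scores. The sketch below reproduces only the shape of that approach on synthetic data at a trivially small scale; it is not their model, and the numbers are illustrative.

```python
# Sketch of the general approach in Youyou et al. (2015): predict a
# personality trait score from a binary user-by-item matrix of Likes.
# All data here are synthetic; the original study used real Like records.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_users, n_items = 200, 50
likes = rng.integers(0, 2, size=(n_users, n_items))       # 1 = user liked item
true_weights = rng.normal(size=n_items)                   # hidden trait signal
trait = likes @ true_weights + rng.normal(scale=0.5, size=n_users)

model = Lasso(alpha=0.1).fit(likes[:150], trait[:150])    # train on 150 users
predicted = model.predict(likes[150:])                    # judge the other 50
print("held-out correlation:", np.corrcoef(predicted, trait[150:])[0, 1])
```

In the original study, the accuracy of such computer judgements rose with the number of Likes available, eventually exceeding the accuracy of judgements by the user’s friends and family.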

Information that goes ‘viral’, spreading rapidly across social networks via SNS, does not have to be truthful. In its Outlook on the Global Agenda 2014, the World Economic Forum identified the rapid spread of online misinformation as one of the most pressing concerns of the time (World Economic Forum, 2013), and the issue persists today under the label ‘fake news’. Fake news is defined as the deliberate publication of fictitious information, including hoaxes and propaganda, on SNS fora (Douglas, Ang, & Deravi, 2017). It has contributed to the widespread acceptance of conspiracy theories online, including one during the 2016 US Presidential election (#Pizzagate) that generated over one million tweets alleging that the Democratic Party was operating a paedophile ring beneath a pizza restaurant in Washington, DC (Breiner, 2016). The Twitter accounts involved in discussing such controversial matters are not necessarily run by concerned citizens. Such information is both initiated and repeated by automated SNS accounts, or ‘bots’, which were highly active in these roles during the 2016 election (Kollanyi, Howard, & Woolley, 2016). One paper noted that the same accounts were active in discussions during the 2017 French Presidential election (Ferrara, 2017), suggesting that they are part of a wider network of voter influence.
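
Audits of this kind rely partly on simple behavioural thresholds; Kollanyi et al., for example, treated accounts tweeting on election hashtags at very high daily rates (on the order of 50 posts per day) as highly automated. The sketch below implements that style of frequency heuristic on invented data; the threshold and timestamps are illustrative, not a reconstruction of their exact procedure.

```python
# A frequency heuristic in the spirit of Kollanyi, Howard, & Woolley (2016):
# flag accounts that post at extreme daily rates as likely automated.
# The threshold and sample data below are illustrative only.
from collections import Counter
from datetime import datetime

def flag_high_automation(posts, threshold=50):
    """posts: list of (account, datetime) pairs. Returns accounts that
    exceed `threshold` posts on any single calendar day."""
    per_day = Counter((account, ts.date()) for account, ts in posts)
    return {account for (account, _day), n in per_day.items() if n >= threshold}

# 60 posts by one account in a single day -> flagged.
sample = [("@suspect_account", datetime(2016, 10, 19, 12, m % 60)) for m in range(60)]
print(flag_high_automation(sample))  # {'@suspect_account'}
```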

More generally, fake news can be used to discredit the legitimacy of mainstream issues that have become politicised, through careful use of the framing effect: packaging a statement so that it elicits a particular emotion (Kahneman, 2011). For instance, a story widely discussed on SNS platforms in 2016 concerned a petition, signed by many thousands of scientists, declaring global warming a non-issue; in fact, the petition’s roughly 31,000 signatories needed only to hold a Bachelor’s degree in some field of science, rather than any expertise in climate science (Lewandowsky, Ecker, & Cook, 2017). This framing appeals to the confirmation bias of a potential voter who wants to reinforce an existing belief that climate change is a hoax.

This is particularly concerning because much research has shown that believing in conspiracy theories inoculates the subject against truthful information. For instance, Einstein and Glick (2014) report that exposing subjects to a conspiracy claim about government data had a significant negative effect on their trust in all governmental services, including those unrelated to the claim. Further, Lewandowsky, Ecker, Seifert, Schwarz, and Cook (2012) use the term ‘continued influence effect’ to describe subjects’ ongoing reliance on information they know to be false. The authors attribute the persistence of these false beliefs to several factors, including social ones, which by extension would include SNS platforms, further reinforcing the echo chamber effect. They also suggest that breaking free of the effect requires a high degree of attention, and thus deliberate cognitive effort. This is reiterated by Sterling, Jost, and Pennycook (2016), who suggest that susceptibility to ‘bullshit’ claims depends on an over-reliance on heuristic processing and the under-use of effortful cognitive capacities. This is perhaps why, in response to #Pizzagate, one man believed the story credible enough to enter the pizza restaurant with a firearm to investigate the premises for himself, rather than engaging more effortful cognitive processes, such as analysing the credibility of the story (Kafka, 2016).

It is too early to determine the degree to which these influences change votes from one candidate or party to another, or whether they merely add certainty to decisions already made. However, it has been shown that an echo chamber reinforced by confirmation bias and fed with fake news tends to produce a shift towards extremism (Sunstein, 2009). That shift may already be underway, with the World Economic Forum’s 2017 Global Risks Report identifying increasing social and political polarisation as one of the top five global risks (World Economic Forum, 2017). Furthermore, one study found that, in the 2016 US election, Donald Trump attracted more positive sentiment on Twitter in the lead-up to the election than his opponent, and Twitter thus proved a more reliable predictor of the outcome than many mainstream news platforms (Yaqub, Chun, Atluri, & Vaidya, 2017). While this is not sufficient to consider Twitter a source of accurate polling, it is evident that SNS platforms are used as a tool to both measure and manipulate voter intention.
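
At its crudest, a sentiment comparison of the kind reported by Yaqub et al. can be approximated by a lexicon tally over tweets mentioning each candidate. The sketch below is illustrative only: the word lists, candidate names, and tweets are invented, and published studies use far richer sentiment tools than this.

```python
# Minimal lexicon-based sketch of candidate sentiment comparison, in the
# spirit of Yaqub et al. (2017). Word lists and tweets are invented.
POSITIVE = {"great", "win", "strong", "love"}
NEGATIVE = {"crooked", "weak", "sad", "lies"}

def sentiment(text):
    """Positive-minus-negative word count for one tweet."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def mean_sentiment(tweets):
    """Average sentiment score across a candidate's tweet sample."""
    return sum(sentiment(t) for t in tweets) / len(tweets) if tweets else 0.0

candidate_tweets = {
    "candidate_a": ["great rally tonight", "we will win"],
    "candidate_b": ["sad performance", "more lies again"],
}
for name, tweets in candidate_tweets.items():
    print(name, mean_sentiment(tweets))
```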

References

Barberá, P., Jost, J. T., Nagler, J., Tucker, J. A., & Bonneau, R. (2015). Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber? Psychological Science, 26(10), 1531-1542. doi:10.1177/0956797615594620
Boutyline, A., & Willer, R. (2017). The Social Structure of Political Echo Chambers: Variation in Ideological Homophily in Online Networks. Political Psychology, 38(3), 551-569. doi:10.1111/pops.12337
Boyd, D. M., & Ellison, N. B. (2007). Social Network Sites: Definition, History, and Scholarship. Journal of Computer-Mediated Communication, 13(1), 210-230. doi:10.1111/j.1083-6101.2007.00393.x
Breiner, A. (2016). Pizzagate, explained: Everything you want to know about the Comet Ping Pong pizzeria conspiracy theory but are too afraid to search for on Reddit. Retrieved from https://www.salon.com/2016/12/10/pizzagate-explained-everything-you-want-to-know-about-the-comet-ping-pong-pizzeria-conspiracy-theory-but-are-too-afraid-to-search-for-on-reddit/
Channel 4. (2018). Exposed: Undercover secrets of Trump’s data firm. Retrieved from https://www.channel4.com/news/exposed-undercover-secrets-of-donald-trump-data-firm-cambridge-analytica
Colleoni, E., Rozza, A., & Arvidsson, A. (2014). Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data. Journal of Communication, 64(2), 317-332. doi:10.1111/jcom.12084
Douglas, K., Ang, C. S., & Deravi, F. Y. (2017). Reclaiming the truth. The Psychologist, 30, 36-42.
Einstein, K. L., & Glick, D. M. (2014). Do I Think BLS Data are BS? The Consequences of Conspiracy Theories. Political Behavior, 37(3), 679-701. doi:10.1007/s11109-014-9287-z
Ferrara, E. (2017). Disinformation and social bot operations in the run up to the 2017 French presidential election. First Monday, 22(8).
Garrett, R. K. (2009). Echo chambers online?: Politically motivated selective exposure among Internet news users. Journal of Computer-Mediated Communication, 14(2), 265-285. doi:10.1111/j.1083-6101.2009.01440.x
Hart, W., Albarracin, D., Eagly, A. H., Brechan, I., Lindberg, M. J., & Merrill, L. (2009). Feeling validated versus being correct: A meta-analysis of selective exposure to information. Psychological Bulletin, 135(4), 555-588. doi:10.1037/a0015701
Iyengar, S., & Hahn, K. S. (2009). Red Media, Blue Media: Evidence of Ideological Selectivity in Media Use. Journal of Communication, 59(1), 19-39. doi:10.1111/j.1460-2466.2008.01402.x
Kafka, P. (2016). An astonishing number of people believe Pizzagate, the Facebook-fueled Clinton sex ring conspiracy story, could be true. recode. Retrieved from https://www.recode.net/2016/12/9/13898328/pizzagate-poll-trump-voters-clinton-facebook-fake-news
Kahneman, D. (2011). Thinking, Fast and Slow. United Kingdom: Penguin.
Knobloch-Westerwick, S., & Kleinman, S. B. (2011). Preelection Selective Exposure. Communication Research, 39(2), 170-193. doi:10.1177/0093650211400597
Kollanyi, B., Howard, P. N., & Woolley, S. C. (2016). Bots and Automation over Twitter during the Third U.S. Presidential Debate. Retrieved from http://www.politicalbots.org/
Kwak, H., Lee, C., Park, H., & Moon, S. (2010). What is Twitter, a social network or a news media? Paper presented at the Proceedings of the 19th International Conference on World Wide Web (WWW '10), Raleigh, North Carolina, USA.
Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychological Science in the Public Interest, 13(3), 106-131. doi:10.1177/1529100612451018
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond Misinformation: Understanding and Coping with the “Post-Truth” Era. Journal of Applied Research in Memory and Cognition, 6(4), 353-369. doi:10.1016/j.jarmac.2017.07.008
Persily, N. (2017). Can Democracy Survive the Internet? Journal of Democracy, 28(2), 63-76. doi:10.1353/jod.2017.0025
Sterling, J., Jost, J. T., & Pennycook, G. (2016). Are neoliberals more susceptible to bullshit? Judgment and Decision Making, 11(4), 352-360.
Sunstein, C. R. (2007). Republic.com 2.0. Princeton, NJ: Princeton University Press.
Sunstein, C. R. (2009). Going to Extremes: How Like Minds Unite and Divide. New York: Oxford University Press.
World Economic Forum. (2013). Outlook on the Global Agenda 2014. Retrieved from http://www3.weforum.org/docs/WEF_GAC_GlobalAgendaOutlook_2014.pdf
World Economic Forum. (2017). The Global Risks Report 2017. Retrieved from http://www3.weforum.org/docs/GRR17_Report_web.pdf
Yaqub, U., Chun, S. A., Atluri, V., & Vaidya, J. (2017). Analysis of political discourse on Twitter in the context of the 2016 US presidential elections. Government Information Quarterly, 34(4), 613-626. doi:10.1016/j.giq.2017.11.001
Youyou, W., Kosinski, M., & Stillwell, D. (2015). Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences, 112(4), 1036-1040. doi:10.1073/pnas.1418680112
