Open to the public
Fake news: Why we fall for it and what to do about it
Why do people believe and share misinformation, including entirely fabricated news headlines (“fake news”) and biased or misleading coverage of actual events (“hyper-partisan” content)? The dominant narrative in the media and among academics is that we believe misinformation because we want to – that is, we engage in motivated reasoning, using our cognitive capacities to convince ourselves of the truth of statements that align with our political ideology rather than to uncover the truth. In a series of survey experiments with American participants, we challenge this account. We consistently find that subjects who perform better on the Cognitive Reflection Test (a measure of the tendency to engage in analytic thinking) are better able to identify false or biased headlines – even headlines that align with their own political ideology (for details, see https://www.sciencedirect.com/science/article/pii/S001002771830163X and http://www.nytimes.com/2019/01/19/opinion/sunday/fake-news.html). Examining actual Twitter behavior, we also find that more reflective individuals share news from higher-quality sources. These findings suggest that susceptibility to misinformation is driven more by laziness and lack of reasoning than by partisan bias or motivated reasoning.

We then build on this observation to examine interventions to fight the spread of misinformation. First, we show that laypeople are much less biased in their evaluations of the trustworthiness of news outlets than one might imagine, giving fake news and hyper-partisan outlets low trust ratings regardless of their political slant. Using crowdsourced ratings of outlet quality to inform social media ranking algorithms is therefore a promising approach (for details, see http://www.pnas.org/cgi/doi/10.1073/pnas.1806781116).
Second, we explore the power of making the concept of accuracy top-of-mind, thereby increasing the likelihood that people consider the accuracy of headlines before deciding whether to share them. We test this intervention using both survey experiments on Amazon Mechanical Turk and field experiments on Twitter. Our results suggest that reasoning is not held hostage by partisan bias; rather, our participants are able to tell fake or inaccurate news from real news – if they bother to pay attention. Our findings also point to simple, cost-effective behavioral interventions to fight the spread of misinformation.