David Rand, MIT

Understanding and reducing misinformation online

David Rand is the Erwin H. Schell Professor and Professor of Management Science and Brain and Cognitive Sciences at MIT. Bridging the fields of cognitive science, behavioral economics, and social psychology, David’s research combines behavioral experiments and online/field studies with mathematical/computational models to understand human decision-making. His work focuses on illuminating why people believe and share misinformation and “fake news”; understanding political psychology and polarization; and promoting human cooperation. His work has been published in peer-reviewed journals such as Nature, Science, PNAS, the American Economic Review, Psychological Science, Management Science, the New England Journal of Medicine, and the American Journal of Political Science, and has received widespread media attention. He has also written for popular press outlets including the New York Times, Wired, and New Scientist. He was named to Wired magazine’s Smart List 2012 of “50 people who will change the world,” chosen as a 2012 Pop!Tech Science Fellow, awarded the 2015 Arthur Greer Memorial Prize for Outstanding Scholarly Research, chosen as fact-checking researcher of the year in 2017 by the Poynter Institute’s International Fact-Checking Network, awarded the 2020 FABBS Early Career Impact Award from the Society for Judgment and Decision Making, and selected as a 2021 Best 40-Under-40 Business School Professor by Poets & Quants. Papers he has coauthored have been awarded Best Paper of the Year in Experimental Economics, Social Cognition, and Political Methodology.

Abstract: In recent years, there has been a great deal of concern about the proliferation of false and misleading news on social media. Academics and practitioners alike have asked why people share such misinformation, and have sought ways to reduce it. In this talk, I will describe work addressing both questions. First, using survey experiments, we find that the veracity of headlines has little effect on sharing intentions despite having a large effect on judgments of accuracy; that is, accuracy judgments are much more discerning than sharing intentions. This dissociation suggests that sharing does not necessarily indicate belief. Nonetheless, most participants say it is important to share only accurate news. To shed light on this apparent contradiction, we carried out numerous survey experiments (including cross-cultural replications across 16 countries) and a field experiment on Twitter. The results show that subtly shifting attention to accuracy, for example by asking people to rate the accuracy of a single random headline, increases the quality of news that people subsequently share (see https://bit.ly/3kaTC9l for details). Together with additional computational analyses, these findings indicate that people often share misinformation because their attention is focused on factors other than accuracy, and they therefore fail to act on a strongly held preference for accurate sharing. In a separate experiment, we find that accuracy ratings from fairly small groups of laypeople agree closely with those of professional fact-checkers, such that the “wisdom of crowds” can be leveraged to identify misinformation at scale (see https://bit.ly/3zf1XgL for details). Together, these findings suggest that occasionally asking users to rate content as they browse has a double benefit: it primes them to think about accuracy (and thus improves the quality of what they share) while also helping to identify problematic content.
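
To make the “wisdom of crowds” idea concrete, here is a minimal, purely illustrative Python sketch (not the study’s actual analysis pipeline): it averages a handful of hypothetical layperson accuracy ratings per headline and correlates the crowd averages with hypothetical fact-checker scores. All headline names, rating scales, and numbers below are made up for illustration.

    from statistics import mean

    def crowd_scores(ratings_by_headline):
        """Average each headline's layperson accuracy ratings into one crowd score."""
        return {h: mean(r) for h, r in ratings_by_headline.items()}

    def pearson(xs, ys):
        """Pearson correlation between two equal-length lists of numbers."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    # Hypothetical data: three headlines, five lay ratings each (1 = inaccurate,
    # 7 = accurate), plus one fact-checker score per headline on the same scale.
    lay_ratings = {
        "headline_a": [6, 7, 5, 6, 7],
        "headline_b": [2, 1, 3, 2, 2],
        "headline_c": [4, 5, 4, 3, 5],
    }
    fact_checker = {"headline_a": 7, "headline_b": 1, "headline_c": 4}

    crowd = crowd_scores(lay_ratings)
    headlines = sorted(crowd)
    r = pearson([crowd[h] for h in headlines],
                [fact_checker[h] for h in headlines])
    print(f"crowd vs. fact-checker agreement: r = {r:.2f}")

The sketch only illustrates the aggregation step; in the research described above, crowd judgments were evaluated against professional fact-checkers across many headlines and raters.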