Working Paper: CEPR ID: DP18650
Authors: Sergei Guriev; Emeric Henry; Théo Marquis; Ekaterina Zhuravskaya
Abstract: We develop a comprehensive framework to assess policy measures aimed at curbing false news dissemination on social media. A randomized experiment on Twitter during the 2022 U.S. midterm elections evaluates policies such as priming awareness of misinformation, fact-checking, confirmation clicks, and prompts to consider content carefully. Priming is the most effective policy in reducing the sharing of false news while increasing the sharing of true content. A model of sharing decisions, motivated by persuasion, partisan signaling, and reputation concerns, predicts that policies affect sharing through three channels: (i) updating the perceived veracity and partisanship of content, (ii) raising the salience of reputation, and (iii) increasing sharing frictions. Structural estimation shows that all policies affect sharing via the salience of reputation and the cost of friction. Updating perceived veracity plays a negligible role as a mechanism for all policies, including fact-checking. The priming intervention performs best at enhancing reputation salience with minimal added friction.
Keywords: social media; anti-misinformation policies; Twitter; priming; salience; fake news
JEL Codes: P00
Edges that are evidenced by causal inference methods are in orange, and the rest are in light blue.
| Cause | Effect |
|---|---|
| Salience of reputation (D83) | Decision to share true news (G14) |
| Cost of sharing (D16) | Decision to share true news (G14) |
| Priming intervention (C92) | Sharing of true tweets (Z13) |
| Priming intervention (C92) | Sharing of false tweets (Z13) |
| Fact-checking treatment (C90) | Sharing of true tweets (Z13) |
| Extra click treatment (Y60) | Sharing of false tweets (Z13) |
| Priming treatment (C92) | Sharing of false tweets (Z13) |
| Offering fact-check (Y20) | Sharing of false tweets (Z13) |
| Ask to assess tweets (C12) | Sharing of false tweets (Z13) |
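The cause–effect table above is, in effect, a directed edge list of a causal graph. A minimal sketch of how it could be represented and queried programmatically (node labels and JEL codes are copied verbatim from the table; the grouping step is illustrative, not part of the paper's methodology):

```python
from collections import defaultdict

# Directed edges (cause, effect) transcribed from the table above.
edges = [
    ("Salience of reputation (D83)", "Decision to share true news (G14)"),
    ("Cost of sharing (D16)", "Decision to share true news (G14)"),
    ("Priming intervention (C92)", "Sharing of true tweets (Z13)"),
    ("Priming intervention (C92)", "Sharing of false tweets (Z13)"),
    ("Fact-checking treatment (C90)", "Sharing of true tweets (Z13)"),
    ("Extra click treatment (Y60)", "Sharing of false tweets (Z13)"),
    ("Priming treatment (C92)", "Sharing of false tweets (Z13)"),
    ("Offering fact-check (Y20)", "Sharing of false tweets (Z13)"),
    ("Ask to assess tweets (C12)", "Sharing of false tweets (Z13)"),
]

# Group causes by effect to see which outcomes have multiple drivers.
causes_by_effect = defaultdict(list)
for cause, effect in edges:
    causes_by_effect[effect].append(cause)

for effect, causes in causes_by_effect.items():
    print(f"{effect} <- {len(causes)} cause(s)")
```

Sharing of false tweets is the most heavily targeted outcome, with five distinct interventions pointing to it.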