Longer title for this question: To what extent does misinformation/disinformation (or the rise of deepfakes) pose a problem? And to what extent is it tractable?
- Are there good analyses of the scope of this problem? If not, does anyone want to do a shallow exploration?
- Are there promising interventions (e.g., certificates of some kind) that could be effective (in the important sense)?
Context and possibly relevant links:
- Deepfakes: A Grounded Threat Assessment - Center for Security and Emerging Technology (I’ve only skimmed the beginning of this paper — would really appreciate a partial summary or an epistemic spot check of some kind)
- Misinformation (EA Forum Wiki)
- The "misinformation problem" seems like misinformation
- Deepfake video of Zelenskyy could be 'tip of the iceberg' in info war, experts warn
- Nina Schick on disinformation and the rise of synthetic media - 80,000 Hours
- Fake news in India - Wikipedia
- The spread of misinformation: A pattern we see over and over | Statistical Modeling, Causal Inference, and Social Science
I’m posting this because I’m genuinely curious, and feel like I lack a lot of context on this. I haven't done any relevant research myself.
One speculative, semi-vague, and perhaps hedgehoggy point that I've often come back to when thinking about this:
I think it's quite possible that many people hold a set of beliefs/assumptions about democracies that causes them to grossly (albeit perhaps not ultimately) underestimate the threat of mis- and disinformation in democracies. In conversations and research presentations I've listened to, I've frequently heard people frame the issue of audiences believing misinformation/disinformation as those audiences making some mistake or irrational choice. This certainly makes sense when it comes to conspiracy theories that tell you to do personally harmful things, like refusing all vaccines or foolishly investing all of your money in some bubble. However, I feel that people in these conversations/presentations occasionally conflate epistemic rationality (i.e., wanting to have accurate beliefs) with instrumental rationality (i.e., wanting to do--including believe--whatever maximizes one's own interests): sometimes having inaccurate beliefs is more personally beneficial than having accurate beliefs, especially for social or psychological reasons.
This stands out most strongly when it comes to democracies and voting: unlike your personal medical and financial choices, your individual voting behavior has effectively no "ostensible personal impact" (i.e., on who actually gets elected and, subsequently, on which policies affecting you get put into place). Given this, lines of reasoning such as "voters are incentivized to have accurate beliefs, because if they believe crazy things they're more likely to support policies that harm themselves" are flawed.
In reality, rather than simply asking "Why do voters hold these irrational beliefs?" or "Why are they making these mistakes?", I think it's important to also ask "Why would we even expect these voters to have accurate beliefs in the first place?"
Ultimately, I have more nuanced views on the potential health and future of democracy, but I think that disinformation/misinformation strikes at one of the core weak points of democracy: [setting aside the non-democratic features of democracies (e.g., non- or semi-democratic institutions within government)] democracies manage to function largely because voters are 1) delusional about the impact of their voting choices, and/or 2) motivated by psychological and social reasons--including norms like "I shouldn't believe crazy things"--to make somewhat reasonable decisions. Mis- and disinformation, however, seem to undermine both of these mechanisms, especially the norms in the second.