James Özden and Sam Glover at Social Change Lab wrote a literature review on protest outcomes[1] as part of a broader investigation[2] into protest effectiveness. The report covers multiple lines of evidence and addresses many relevant questions, but it does not say much about the methodological quality of the research. So that's what I'm going to do today.
I reviewed the evidence on protest outcomes, focusing only on the highest-quality research, to answer two questions:
1. Do protests work?
2. Are Social Change Lab's conclusions consistent with the highest-quality evidence?
Here's what I found:
Do protests work? Highly likely (credence: 90%) in certain contexts, although it's unclear how well the results generalize. [More]
Are Social Change Lab's conclusions consistent with the highest-quality evidence? Yes—the report's core claims are well-supported, although it overstates the strength of some of the evidence. [More]
Cross-posted from my website.
Introduction
This article serves two purposes: First, it analyzes the evidence on protest outcomes. Second, it critically reviews the Social Change Lab literature review.
Social Change Lab is not the only group that has reviewed protest effectiveness. I was able to find four literature reviews:
1. Animal Charity Evaluators (2018), Protest Intervention Report.
2. Orazani et al. (2021), Social movement strategy (nonviolent vs. violent) and the garnering of third-party support: A meta-analysis.
3. Social Change Lab – Ozden & Glover (2022), Literature Review: Protest Outcomes.
4. Shuman et al. (2024), When Are Social Protests Effective?
The Animal Charity Evaluators review did not include many studies, and did not cite any natural experiments (only one had been published as of 2018).
Orazani et al. (2021)[3] is a nice meta-analysis—it finds that when you show people news articles about nonviolent protests, they are more likely to express support for the protesters' cause. But what people say in a lab setting might not reflect how they behave in response to real-world protests.