(Alternative title: "You're a Scurrilous Maggot-like Blind Bald Toothless Man Who Wants to Start a War With France--Lessons on Avoiding Controversy," which was a reference to this.)
The purpose of this report is to analyze how much EAs should prioritize avoiding making enemies, along with other important lessons from historical movements.
1 Preliminary literature survey
1 It's good for movements to have concrete aims.
2 "Research suggests that many effective social movements combine grassroots participation with support from elites. “Outsiders” who bring time, energy, and commitment to a cause can ally with “insiders,” such as political officials and executives who have political and economic capital and connections.
National Academies of Sciences, Engineering, and Medicine. 2014. Supporting a Movement for Health and Health Equity: Lessons from Social Movements: Workshop Summary. Washington, DC: The National Academies Press. https://doi.org/10.17226/18751."
3 Having a good theme is important. "Equality is an especially persuasive theme, she said, and it usually trumps the theme of personal freedom, according to findings by the Pew Research Center. Polletta noted that, according to the Pew findings, 90 percent of Americans believe that “the government should do everything it can to ensure equality of opportunity.”"
National Academies of Sciences, Engineering, and Medicine. 2014. Supporting a Movement for Health and Health Equity: Lessons from Social Movements: Workshop Summary. Washington, DC: The National Academies Press. https://doi.org/10.17226/18751.
4 Social movements are best when they rely on multiple framings, e.g., opposition to the death penalty can attack from multiple directions, arguing that it is both unfair and ineffective.
5 Effective movements have an antagonist; it's hard to mobilize without an enemy. Environmental movements that piss people off have been more effective than ones that don't make anyone angry and just advocate individual action. There have, however, been a few successful movements, like those combating drunk driving, that piss no one off, but they're generally limited in scope.
1 Evidence alone is not enough. There's a need for personal salience. Many unsuccessful movements lack salience.
2 Movements need to have diversity (lots of very different people), depth (people in positions of power), and a community.
3 Successful communities require active cultivation; they are not organic.
4 Successful movements harness recent events. BLM used the death of George Floyd, for example. EAs could perhaps have done this to some degree with COVID and with Ukraine, using them to talk about the presence of existential risks and the importance of reducing them.
5 Movements should work with what they have and do the following:
A) Simplify complex issues.
This could be done with slogans like "do the most good we can." However, this risks backfiring by drawing in people who don't find the ideas attractive.
B) Have evocative images that impact people emotionally, e.g., malaria deaths. (This could also backfire.)
C) Increase the power of excluded groups.
D) Make change more congruent with the status quo.
These last two don't seem to apply very much to EA.
1 Having a clear purpose is important.
Occupy vs. Otpor provides a clear instance of this. Occupy's people went home a few months later, while Otpor toppled Milošević and "went on to train activists in the Georgian Rose Revolution, the Ukrainian Orange Revolution and the April 6 Youth Movement in Egypt, just to name a few."
More information on Otpor
2 Values are important. Otpor trained its activists, which helped make it successful.
3 Organized small groups are more important than very big groups.
4 It's important to overcome steadily increasing thresholds of resistance and engage with the broader world to avoid insularity. Influencing others is the ultimate way movements make change.
5 Rely on engagement, not rhetoric.
Mothers Against Drunk Driving had a decent impact, but it wasn't huge. It was pretty widely supported, but many other movements are much bigger. This seems to provide further support for the claim that uncontroversial movements tend not to spread.
6 Foes without faces are largely ignored.
A) US efforts to fight Bin Laden spread polio; that wouldn't have passed a cost-benefit analysis, but anger about Bin Laden outweighed prudence.
B) Terrorism is a prime example. We spend much more on terrorism than on traffic accidents, even though traffic accidents kill far more people.
C) Evolution favors having enemies and not getting too mad at nature.
D) Humans are addicted to outrage.
E) We see patterns even when there are not any.
Singer was very controversial and one of the most influential public intellectuals. Contrast him with someone like Parfit, who was less controversial, made fewer public appearances, and is largely unknown to the general public.
All of these top public intellectuals were controversial.
7 Below are top results from Googling successful movements. I am going to make the testable prediction that all of them had enemies. I have not checked yet, so this is a test of the theory.
A) Floyd protests. Most definitely had enemies.
B) March for science. Yes.
C) Women's march. Yes.
D) Protestant reformation. Yes.
E) Storming of Bastille. Yes.
F) Gandhi's salt march. Yes.
G) Boston tea party. Yes.
H) South Africa's national day of protest. Yes.
I) March on Washington. Yes.
J) Tiananmen square. Yes.
K) Berlin wall protests. Yes.
L) Iraq War protests. Yes.
M) The Orange Revolution. Yes.
So far the pattern is 13 for 13. However, this sample only covers successful protests, which
A) Is not indicative of movements more broadly.
B) Doesn't tell us what percentage of all protests are controversial.
8 If making some enemies and being controversial were the best way to expand, I would expect EA to have been grown mostly by more controversial people like Singer, Scott Alexander, and Yudkowsky, and less by people like Ord, MacAskill, and Vox journalists like Klein and Piper. Looking into how people got into EA would be a useful way to test this hypothesis--and more specifically, looking at how they got actively involved in the movement.
9 This article talks about EA. Its mention of Slate Star Codex is illustrative: SSC is somewhat controversial, so the enormous interest in it could provide some support for the hypothesis. However, it is also possible that controversy would divert attention away from the rest of EA.
Relatedly, this article by Scott points to his posts with the highest viewership; they are the more controversial ones. Controversies draw people in and make them interested.
Related ideas are expressed here.
10 Veganism seems like a parallel case. This provides interesting data about that hypothesis. Key findings:
A) Documentaries convinced lots of people. It might thus be worthwhile to make EA-related documentaries; that tends to be a way of pushing relatively widely believed messages to a general audience.
B) Having conversations with friends and family also convinced lots of people. Doing that seems worthwhile.
C) Internet videos convinced lots of people.
D) Social media posts also convinced lots of people.
Blogs and books were less effective.
Among people who went vegan in response to video clips, the clips were often controversial ones. Gary Yourofsky's speech was the most common, and it is relatively controversial. Graphic footage of animal slaughter was also relatively common.
11 Additional evidence for this comes from the New Atheists. Popular New Atheists like Hitchens, Dawkins, and Harris got much more press than more serious intellectuals like Malpass and Oppy.
12 (A fairly weak piece of evidence) Rationality Rules is an atheist YouTuber who went from being less intellectual and more polemical to being more intellectual and less polemical. This resulted in his YouTube channel stagnating.
13 Scott Alexander has become less polemical over time and people tend to prefer his earlier writing.
2 Countervailing considerations about pissing people off
This thread provides some countervailing considerations. I will present and respond to them here.
1 EA being controversial might be hard to reverse. Hard-to-reverse decisions could destroy EA's option value.
This is true; however, its persuasiveness will depend on whether one thinks there are significant risks from growing too large. I don't think there are good structural reasons to expect this to be the case--in the worst case, new money just distributes bed nets, which is still a good thing. More interest in the movement would allow it to have more people focusing on directing funds efficiently. At the very least, growing the number of people who give to the Malaria Consortium and similar organizations seems unambiguously good.
2 EA being controversial could crowd out doing clearly good things. I.e., if EA becomes associated with Democrats, that could hamper Republican support for EA. People throughout history who have had good ideas have been hampered by getting involved in politics.
This critique is also reasonable. A few responses.
A) I would generally be opposed to pissing off lots of people. However, giving the movement's opposition a human face could promote growth. A few ways of doing this without pissing off lots of people: having EAs talk a lot about how terrible factory farms are (factory farms are usually pretty unpopular); aggressively responding to and debating critics (I've tried to do this here, here, here, here, and here; Scott Alexander has done this here; MacAskill has done it here); talking about politicians' failure to take serious action on existential risk; and criticizing ineffective organizations that people tend to agree are bad, like Homeopaths Without Borders. Additionally, EAs could write articles criticizing AI risk sceptics, debate people about longtermism the way Christian apologists often do about God, and write popular articles and books about it.
B) Given the previous evidence, becoming controversial could be good overall for movement growth. Currently EA is very small, so if controversy makes lots more people hear about it and attracts a decent number of motivated people, that could be worth it even if we alienate some people.
C) EA could espouse relatively nonpartisan political aims with clear benchmarks. Examples include advocating government research into AI alignment, concrete actions on biorisk like banning or heavily regulating gain-of-function research, reducing the size of, and de-alerting, some nuclear arsenals (especially trying to do so in combination with other countries), talking about failed US policies that people inclined toward EA would likely agree failed (e.g., the war on terror, especially in ways that spread polio, and cuts to important foreign aid programs), regulations on factory farms, regulations on AI, and many others.
3 EA being more controversial could cause reputational harm.
This is true; however, it seems it could also produce great reputational benefit. Hitchens had a much wider impact, reputation-wise, than Oppy, despite Oppy being a much better philosopher, because Hitchens was much more controversial and spoke publicly more often and more effectively.
4 Being controversial can lock in suboptimal choices.
This is true, however, it can also expand the movement enough to allow pursuit of better choices. More people can fund more research and get people more involved in the movement.
5 Being considerate is very important.
Agreed! We should be nice to people. However, we can be both nice to people and more controversial by doing the things previously discussed.
6 As Gertler says "I feel like the main role of a bulldog is to fend off the fiery, polemical enemies of a movement. Atheism and veganism (and even AI safety, kind of) have clear opponents; I don't think the same is especially true of EA (as a collection of causes)."
Several points.
1 EA can have fiery defenders of areas that have critics, like animal welfare and AI safety, without having them opine on the entire movement.
2 It's not clear that this is true. Bulldogs may generally arise when there is a controversy, but there's no necessary relationship between the two. Many New Atheists spent lots of firepower on a small contingent of theists.
3 This doesn't detract from the main point that it would be good for EA to be controversial; it's mostly just an argument against bulldogs.
4 There are lots of critics of effective altruism whom one can easily find by googling. Debating these people could be a good way to gain influence.
Another point of comparison could be the online political sphere. The online political right rose to prominence in large part by attacking a small contingent of left-wingers. The online political left largely grew to prominence through debates. Examples of figures who have risen to prominence through controversy combined with outdebating interlocutors include Ben Shapiro, Destiny, Jordan Peterson, and many others. A similar path could allow EAs to rise to prominence.
There are some worries about growing EA. However, a larger version of EA would be very good for the world. As Scott Alexander says, relying on Sachs's figures:
"I think this moral lesson is really important – if everyone gave 10% of their income to effective charity, it would be more than enough to end world poverty, cure several major diseases, and start a cultural and scientific renaissance."
Even if EA just pursued global health, a global EA program could have such an enormous positive impact that growing it could have very high EV. EA has done lots of good while being fairly small. Being larger could be much better.
If there were lots of single-issue voters on longtermism, that could have dramatic positive impacts on politics. If more people opposed factory farms, that could similarly combat enormous amounts of suffering.
Bensinger gives lots of reasons to oppose growing EA in ways that might dilute the movement. I'll respond to these here.
""Eternal September" worries: rapid growth makes it harder to filter for fit, makes it harder to bring new members up to speed, makes it harder for high-engagement EAs to find each other, etc."
This worry seems overblown. More people in the movement provide greater opportunities to bring new people up to speed and to find high-engagement EAs. If only 1% of the people who would be high-impact EAs if exposed to EA are currently exposed to it, that massively limits the number of high-impact EAs. Additionally, it's unclear why being able to find high-engagement EAs is so important. More EAs will increase forum posting, serious engagement through 80,000 Hours and the like, money given, etc. I can't think of a scenario in which this would be a problem. If a person wants to hire researchers, the normal filtering process would prevent unserious EAs from signing up for research.
In terms of maximizing the value of very motivated EAs who make EA a significant part of their life, I'd say overall this counts in favor of movement growth.
"A large movement is harder to "steer". Much of EA's future impact likely depends on our ability to make unusually wise prioritization decisions, and rapidly update and change strategy as we learn more. Fast growth makes it less likely we'll be able to do this, and more likely we'll either lock in our current ideas as "the truth" (lest the movement/community refuse when it comes time for us to change course), or end up drifting toward the wrong ideas as emotional appeal and virality comes to be a larger factor in the community's thinking than detailed argument."
This is a real concern. However, more EAs allow more research into new avenues, so while the ship may be harder to steer, there are more people trying to steer it. More people in the movement would allow more pushback from non-EAs and would bring more interested and interesting people with good ideas, making for better arguments overall. I'd say that retaining the argumentative nature of EA and preventing lock-in is about a wash between expanding EA and not expanding it.
"As EA became less bottlenecked on "things anyone can do" (including donating) and more bottlenecked on rare talents, it became less valuable to do broad "grow the movement" outreach and more valuable to do more targeted outreach to plug specific gaps."
A few points.
1 The marginal value of a dollar given to EA is still pretty high, even if, at the margin, getting talented people to be pro-EA is more important.
2 More people being interested in EA allows more engaged community members to work on things requiring rare talents. One is much more likely to find a Democrat who is uniquely good at understanding the politics of North Carolina than an effective altruist who is, because there are more Democrats than effective altruists. Creating more effective altruists would help grow the pool of talent.
3 More money allows hiring talented people who would not otherwise be as interested in EA.
Bensinger says next, "This period also overlapped with a shift in EA toward longtermism and x-risk. It's easier to imagine a big nation-wide movement that helps end malaria or factory farming, whereas it's much harder to imagine a big nation-wide movement that does reasonable things about biorisk or x-risk, since those are much more complicated problems requiring more specialized knowledge. So a shift toward x-risk implies less interest in growth."
I don't know if this is true. There were big denuclearization protests. And there are lots of easy ways to package EA ideas about longtermism that would be good.
Example of a few-sentence pitch for EA work on bioweapons:
"There are dozens of examples of diseases escaping from labs to devastating effect. With new technology, the risk only becomes greater. Let's take concrete actions, like restricting risky gain-of-function research and heavily sealing biolabs, to prevent the risks that experts worry could kill millions of people in the coming century."
Lots of people in EA don't need to know about the technical problems associated with AI--the pitch that "lots of smart people, including most AI researchers, are worried about AI and think it could end the world" is pretty compelling.
While a broader movement would perhaps water down the philosophical arguments about caring overwhelmingly about the future, it could increase political influence in ways that would be good for the world. It doesn't take that many people supporting AI or biorisk policies to make them mainstream.
3 Overall takeaways
- Growing EA is plausibly a very good thing.
- Good messaging is also important (it might be worth incorporating equality-based messaging when talking about EA--talking about equality of opportunity for people dying of malaria).
- Being controversial but not too controversial is good. Ideally, the people EA pisses off should be people not otherwise inclined toward EA (e.g., critics of EA, AI sceptics, and anti-EA people).
- Having good EA-related documentaries, YouTube videos, and public debates would all be extremely useful.
- Making it easy for people to become part of EA communities would be good. Expanding 80,000 Hours so that meetings can happen much more quickly, setting up EA meetups in lots of areas, and doing other things to reduce the trivial inconveniences that hinder getting involved in EA would all help.
- Good ways of doing this might be saying, in big public forums, things that EAs believe that are somewhat controversial (e.g., the longtermist thesis, that factory farms are really bad, and that AI poses an existential risk).
- It's also important that there is a group of very motivated EAs. Movements that succeed don't do so just by having lots of people who are tangentially interested; they generally need a decent number of very motivated people.
- Podcasts might be a good way to promote EA ideas.
- There is probably too much writing about EA relative to other media like videos and movies.
- An EA being interviewed by mainstream media and defending their ideas well could be quite good for the world, similar to the way Jordan Peterson rose to prominence. This is actually a decent analogy, because most of what he offers is self-help advice, yet through controversy he was able to become very prominent. The same may be true of Sam Harris.