If all mankind minus one, were of one opinion, and only one person were of contrary opinion, mankind would be no more justified in silencing that one person, than he, if he had the power, would be justified in silencing mankind.
  - John Stuart Mill, On Liberty, p. 23
We strive to base our actions on the best available evidence and reasoning about how the world works. We recognise how difficult it is to know how to do the most good, and therefore try to avoid overconfidence, to seek out informed critiques of our own views, to be open to unusual ideas, and to take alternative points of view seriously. ...
We are a community united by our commitment to these principles, not to a specific cause. Our goal is to do as much good as we can, and we evaluate ways to do that without committing ourselves at the outset to any particular cause. We are open to focusing our efforts on any group of beneficiaries, and to using any reasonable methods to help them. If good arguments or evidence show that our current plans are not the best way of helping, we will change our beliefs and actions.


This post argues that Cancel Culture is a significant danger to the potential of the EA project, discusses the mistakes made by EA Munich and CEA in their deplatforming of Robin Hanson, and provides advice on how to avoid such issues in the future.

As ever, I encourage you to use the navigation pane to jump to the parts of the article that are most relevant to you. In particular, if you are already convinced, you might skip the 'examples' and 'quotes' sections.


The Nature of Cancel Culture

In the past couple of years, much damage has been done to the norms around free speech and inquiry, in substantial part due to what’s often called cancel culture. Of relevance to the EA community, there have been an increasing number of highly public threats and attacks on scientists and public intellectuals, where researchers are harassed online, disinvited from conferences, have their papers retracted, and are fired, because mass online mobs react to an accusation over slight wording on topics of race, gender, and other issues of identity, or to guilt-by-association with other people who have also been attacked by such mobs.

This is colloquially called ‘cancelling’, after the hashtags that have formed saying #CancelX or #xisoverparty, where X is some person, company or other entity, hashtags which are commonly trending on Twitter.

While such mobs cannot attack every person who speaks in public, they can attack any person who speaks in public, leading to chilling effects where nobody wants to talk about the topics that can lead to cancelling.

Cancel Culture essentially involves the following steps:

  1. A victim, often a researcher, says or does something that irks someone online.
  2. This critic then harshly criticises the person using attacks that are hard to respond to in our culture - the accusation of racism is a common one. The goal of this attack is to signal to a larger mob that they should pile on, with the hope of causing massive damage to the person’s private and professional lives.
  3. Many more people then join in the attack online, including (often) contacting their employer.
  4. People who defend the victim are attacked as also being guilty of a similar crime.
  5. Seeing this dynamic, many associates of the victim prefer to sever their relationship rather than be subject to this abuse. This may also include their employer, for whom the loss of one employee seems a small price to pay for protecting their public image.
  6. The online crowd may swiftly move on; however, the victim now lives under a cloud of suspicion that is hard to displace and can permanently damage their career and social life.
  7. Other researchers, observing this phenomenon, choose to remain silent on issues they think may draw the attention of such cancel mobs.

It’s certainly the case that such a pattern of behaviour existed before, but the issue seems to have become significantly worse in recent years.


There have been many examples of this form of abuse in recent months. Below I’ve included quotes illustrating a few, but I encourage the interested reader to research more themselves. If you’re already aware of these cases, especially since the first one is already so prominent in our community, feel free to skip to the section titled ‘Cancel Culture is Harmful for EA’.

One disadvantage of these examples is they show only the tip of the iceberg. They can show us the cases where someone was forced into a humiliating apology, or fired from their job, but they cannot show us the massively greater cost of everyone who self-censored out of fear. Those who write on this subject invariably seem to receive a slew of grateful communications from academics who were too afraid to speak out themselves.

Scott Alexander

Scott Alexander is one of the most skilled commentators of our age, with a gift for insightful and unfailingly generous commentary, and a close ally of the EA movement. He has written hugely influential posts on a wide range of topics, including identifying Motte and Bailey arguments, Moloch, reason, the bizarre world of IRBs, why debates focus on the worst possible cases, hierarchies of intellectual contrarianism, and cost disease, and he was early to the replication crisis. And of course, the Biodeterminist’s Guide to Parenting.

One offshoot of this was the Culture-War threads on his associated subreddit, designed to segregate Culture War type discussions from the other comment sections on his blog. While he didn’t directly participate very much, and largely handed off moderation to others, it not only achieved its primary goal (keeping Culture War out of most of his comment sections), but also produced some very valuable discussion:

Thanks to a great founding population, some very hard-working moderators, and a unique rule-set that emphasized trying to understand and convince rather than yell and shame, the Culture War thread became something special. People from all sorts of political positions, from the most boring centrists to the craziest extremists, had some weirdly good discussions and came up with some really deep insights into what the heck is going on in some of society’s most explosive controversies. For three years, if you wanted to read about the socialist case for vs. against open borders, the weird politics of Washington state carbon taxes, the medieval Rule of St. Benedict compared and contrasted with modern codes of conduct, the growing world of evangelical Christian feminism, Banfield’s neoconservative perspective on class, Baudrillard’s Marxist perspective on consumerism, or just how #MeToo has led to sex parties with consent enforcers dressed as unicorns, the r/SSC culture war thread was the place to be. I also benefited from its weekly roundup of interesting social science studies and arch-moderator baj2235’s semi-regular Quality Contributions Catch-Up Thread.

The users of these threads, as with the rest of his blog and the wider EA ecosystem, skewed left-wing (as shown by the multiple extensive surveys of his readers, with thousands of users filling them out annually). Despite this ground truth, to some people it felt right-wing:

I acknowledge many people’s lived experience that the thread felt right-wing; my working theory is that most of the people I talk to about this kind of thing are Bay Area liberals for whom the thread was their first/only exposure to a space with any substantial right-wing presence at all, which must have made it feel scarily conservative. This may also be a question of who sorted by top, who sorted by new, and who sorted by controversial. In any case, you can just read the last few threads and form your own opinion.

Open discussion of controversial topics naturally leads to some controversial opinions. Inevitably, these are the ones your opponents choose to highlight, so they soon run the risk of dominating your reputation - or at least, your reputation among people who aren’t ‘woke’ to the dangers of cancel culture:

It doesn’t matter if taboo material makes up 1% of your comment section; it will inevitably make up 100% of what people hear about your comment section and then of what people think is in your comment section. Finally, it will make up 100% of what people associate with you and your brand. The Chinese Robber Fallacy is a harsh master; all you need is a tiny number of cringeworthy comments, and your political enemies, power-hungry opportunists, and 4channers just in it for the lulz can convince everyone that your entire brand is about being pro-pedophile, catering to the pedophilia demographic, and providing a platform for pedophile supporters. And if you ban the pedophiles, they’ll do the same thing for the next-most-offensive opinion in your comments, and then the next-most-offensive, until you’ve censored everything except “Our benevolent leadership really is doing a great job today, aren’t they?” and the comment section becomes a mockery of its original goal.

This leads to a narrative that his blog was somehow ‘alt-right’:

People settled on a narrative. The Culture War thread was made up entirely of homophobic transphobic alt-right neo-Nazis. … [I]t was always that the thread was “dominated by” or “only had” or “was an echo chamber for” homophobic transphobic alt-right neo-Nazis, which always grew into the claim that the subreddit was dominated by homophobic etc neo-Nazis, which always grew into the claim that the SSC community was dominated by homophobic etc neo-Nazis, which always grew into the claim that I personally was the most homophobic etc neo-Nazi of them all.

Despite this being clearly false:

I freely admit there were people who were against homosexuality in the thread (according to my survey, 13%), people who opposed using trans people’s preferred pronouns (according to my survey, 9%), people who identified as alt-right (7%), and a single person who identified as a neo-Nazi (who as far as I know never posted about it). … I am a pro-gay Jew who has dated trans people and votes pretty much straight Democrat. I lost distant family in the Holocaust. You can imagine how much fun this was for me.

This led to his being subjected to vicious abuse:

Some people found my real name and started posting it on Twitter. Some people made entire accounts devoted to doxxing me in Twitter discussions whenever an opportunity came up. A few people just messaged me letting me know they knew my real name and reminding me that they could do this if they wanted to.

A common strategy is to try to poison one’s relationships with real-life friends:

Some people started messaging my real-life friends, telling them to stop being friends with me because I supported racists and sexists and Nazis. Somebody posted a monetary reward for information that could be used to discredit me.

And to get someone fired:

One person called the clinic where I worked, pretended to be a patient, and tried to get me fired.

In this case, it didn’t end up with his being fired. ‘All’ that happened was his suffering a nervous breakdown and closing down one of the most popular parts of his site (though it was somewhat reborn under new leadership elsewhere).

The one positive element of this sorry story is that Scott, a devotee of truth to the end, wrote it up as a cautionary tale:

Fifth, if someone speaks up against the increasing climate of fear and harassment or the decline of free speech, they get hit with an omnidirectional salvo of “You continue to speak just fine, and people are listening to you, so obviously the climate of fear can’t be too bad, people can’t be harassing you too much, and you’re probably just lying to get attention.” But if someone is too afraid to speak up, or nobody listens to them, then the issue never gets brought up, and mission accomplished for the people creating the climate of fear. The only way to escape the double-bind is for someone to speak up and admit “Hey, I personally am a giant coward who is silencing himself out of fear in this specific way right now, but only after this message”. This is not a particularly noble role, but it’s one I’m well-positioned to play here, and I think it’s worth the awkwardness to provide at least one example that doesn’t fit the double-bind pattern.

David Shor

David Shor was a data scientist working for a left-wing political consultancy, which analysed data to help Democratic politicians win elections in the US. On May 28th he tweeted a link to an academic paper that argued that while non-violent protests pushed voters to support the Democrats, violent protests pushed them towards the Republicans, saying:

Post-MLK-assasination race riots reduced Democratic vote share in surrounding counties by 2%, which was enough to tip the 1968 election to Nixon. Non-violent protests *increase* Dem vote, mainly by encouraging warm elite discourse and media coverage.

This swiftly led to many heavily critical and aggressive tweets. To illustrate a typical such exchange, I will quote one critic, Trujillo Wesler, at length:

Yo. Minimizing black grief and rage to "bad campaign tactic for the Democrats" is bullshit most days, but this week is absolutely cruel.
This take is tone deaf, removes responsibility for depressed turnout from the 68 Party and reeks of anti-blackness.

Shor earnestly replied:

The mechanism for the paper isn’t turnout, it’s violence driving news coverage that makes people vote for Republicans. The author does a great job explaining his research here: <link>

Trujillo Wesler replied:

Do you think I didn't read the paper and know what I was talking about when calling out your callousness?
I think Omar's analysis is sloppy and underwhelming, but that's not the point.
YOU need to stop using your anxiety and "intellect" as a vehicle for anti-blackness

… before then tagging the CEO of Shor’s company:

@danrwagner Come get your boy.

The next day Shor apologised, and then a few days later he signed a non-disclosure agreement with the company and was fired solely as a result of the tweet.

This story received a lot of press at the time; see for example this article for more details.

Steven Hsu

Steven Hsu is a physics professor at Michigan State University, where he has worked on a wide variety of projects, including advanced genetics work: his team developed novel techniques to predict adult height very accurately from DNA, as well as the risk of a variety of illnesses. Notably for EAs, he co-founded Genomic Prediction, the first company (to my knowledge) offering consumer embryo selection - a technology whose potential has been of great interest to EAs.

As well as a tenured professorship, he also held an administrative role as Senior Vice President. In June the graduate student union started to agitate for his firing:

The concerns expressed by the Graduate Employees Union ... and other individuals familiar with Hsu indicates an individual that cannot uphold our University Mission or our commitment to Diversity, Equity, and Inclusion. Given this discordance with university values, Stephen Hsu should not be privileged with the power and responsibility of recruiting and funding scholars, overseeing ethical conduct, or coordinating graduate study.
By signing this open letter we ask MSU to follow through to its commitment to be a diverse and inclusive institution and to change its institutional and administrative practices so that the passion and talent of Black scholars, Indigenous scholars, and other scholars of color (BIPOC) can be recognized and fostered within these university halls.

A second letter advocating for his dismissal came out, which among other things highlighted his work on embryo selection:

Hsu also appears to be dabbling in eugenics through his beliefs that embryos may be selected on the basis of genetic intelligence.

One might have thought that academic freedom would permit a professor who studied genetics to hold such views, but that was not the case:

Not only do these views ignore the copious social science research on social determinants of intelligence and accomplishments, therefore rendering them suspect in a scholarly sense, it is also deeply disturbing that someone whose role is to allocate funding and provide authoritative input in decisions regarding promotion and tenure cases for faculty in a diverse institution should hold such beliefs.

In a Twitter thread they argued that his scientific work was bad because, if true, it would have undesirable political consequences:

Hsu has also entertained and hosted views arguing that racial underperformance in colleges is related to *lack of segregation* in education and flaws in multiculturality, undercutting the basis of Brown v. Board of education.

Similarly, he was accused of supporting the use of standardised tests to measure cognitive ability:

Hsu is against removing standardized tests like the GRE & SAT because he believes they measure cognitive ability & that lack of Black & Hispanic representation in higher ed reflects lower ability, despite evidence these tests negatively impact diversity.

For brevity's sake I shan’t quote everything they accused him of, but one common thread is the suggestion that his scientific views, or at least caricatures of them, were wrong because, if true, they would contradict the (extreme) political views of the authors, and that because of this he could not be trusted to direct university resources.

A counter-letter was organised, with a large number of very prominent signatories, arguing that his professional conduct was flawless and that he had been badly misrepresented:

The charges of racism and sexism against Dr. Hsu are unequivocally false and the purported evidence supporting these charges ranges from innuendo and rumor to outright lies. (See attached letters for details.) We highlight that there is zero concrete evidence that Hsu has performed his duties as VP in an unfair or biased manner. Therefore, removing Hsu from his post as VP would be to capitulate to rumor and character assassination.

Alas, this was not enough, as the president of his university soon asked him to resign from the role:

President Stanley asked me this afternoon for my resignation. I do not agree with his decision, as serious issues of Academic Freedom and Freedom of Inquiry are at stake. I fear for the reputation of Michigan State University.
However, as I serve at the pleasure of the President, I have agreed to resign. I look forward to rejoining the ranks of the faculty here.

Emmanuel Cafferty

Emmanuel was an ordinary utility worker in Southern California, who was tricked into making an ‘OK’ sign as he drove home one day:

At the end of a long shift mapping underground utility lines, he was on his way home, his left hand casually hanging out the window of the white pickup truck issued to him by the San Diego Gas & Electric company. When he came to a halt at a traffic light, another driver flipped him off. … He flashed what looked to Cafferty like an “okay” hand gesture and started cussing him out. When the light turned green, Cafferty drove off, hoping to put an end to the disconcerting encounter.
But when Cafferty reached another red light, the man, now holding a cellphone camera, was there again. “Do it! Do it!” he shouted. Unsure what to do, Cafferty copied the gesture the other driver kept making. The man appeared to take a video, or perhaps a photo.

Unfortunately, this is now considered by some to be a white supremacist sign (though there is no evidence he was aware of this fact):

Two hours later, Cafferty got a call from his supervisor, who told him that somebody had seen Cafferty making a white-supremacist hand gesture, and had posted photographic evidence on Twitter. (Likely unbeknownst to most Americans, the alt-right has appropriated a version of the “okay” symbol for their own purposes because it looks like the initials for “white power”; this is the symbol the man accused Cafferty of making when his hand was dangling out of his truck.)

Despite the fact that he is 75% Latin American by ancestry, after a series of people called his employer to demand that he be fired, his employer duly caved:

Dozens of people were now calling the company to demand Cafferty’s dismissal … By the end of the call, Cafferty had been suspended without pay. By the end of the day, his colleagues had come by his house to pick up the company truck. By the following Monday, he was out of a job.

More details available in many places including here.

James Bennet

After widespread rioting, on June 3rd the NYT published an op-ed by Tom Cotton, an influential Republican Senator, arguing that the military should be used to restore order:

The pace of looting and disorder may fluctuate from night to night, but it’s past time to support local law enforcement with federal authority.

He was careful to distinguish between peaceful protesters and violent rioters:

[T]he rioting has nothing to do with George Floyd, whose bereaved relatives have condemned violence. On the contrary, nihilist criminals are simply out for loot and the thrill of destruction, with cadres of left-wing radicals like antifa infiltrating protest marches to exploit Floyd’s death for their own anarchic purposes.

And that the majority of voters agreed that this was a good idea:

Not surprisingly, public opinion is on the side of law enforcement and law and order, not insurrectionists. According to a recent poll, 58 percent of registered voters, including nearly half of Democrats and 37 percent of African-Americans, would support cities’ calling in the military to “address protests and demonstrations” that are in “response to the death of George Floyd.”

Whether or not one agrees with the opinion, this seems squarely within the realm of typical op-ed pieces. However, many NYT employees objected, in private and in public, tweeting things like:

A parade of Times journalists tweeted a screen shot showing the headline of Cotton's piece, "Send In the Troops," with the accompanying words: "Running this puts Black @NYTimes staff in danger."

This language is very clever, because complaints about workplace safety enjoy special legal protections that a merely political objection would not, however implausible the safety claim is.

Initially the editor in charge, James Bennet, defended the piece:

Times Opinion owes it to our readers to show them counter-arguments, particularly those made by people in a position to set policy.

Shortly afterwards, he was forced to resign and replaced with a new editor, who made clear that such offensive content would not be tolerated.

More details here and here.

Several months later the NYT published another op-ed, by a senior Hong Kong politician, defending the territory's controversial new security law. In most objective ways it is far more objectionable than Cotton’s piece - the law is extremely draconian, even criminalising conduct all over the world - yet to my knowledge no editor has been made to resign.

Greg Patton

Greg Patton is a professor of business communication at USC. As an expert in Mandarin, he used a Chinese example to illustrate the role of words like ‘um’ and ‘err’:

He also tries to mix in culturally diverse examples. When he talks about the importance of pausing, for instance, he notes that other languages have equivalent filler words. Because he taught in the university’s Shanghai program for years, his go-to example is taken from Mandarin: nèige (那个). It literally means “that,” but it’s also widely used in the same way as um.

Unfortunately, when spoken aloud this sounds similar to a slur in US English. As a result, a group of students complained to the administrators:

[A] group of students sent an email to business-school administrators saying they were “very displeased” with the professor. They accused Patton of “negligence and disregard” and deemed the Mandarin example “grave and inappropriate.” They referenced the killings of George Floyd and Breonna Taylor. “Our mental health has been affected,” they wrote. “It is an uneasy feeling allowing him to have power over our grades. We would rather not take this course than to endure the emotional exhaustion of carrying on with an instructor that disregards cultural diversity and sensitivities and by extension creates an unwelcome environment for us Black students.” The email is signed “Black MBA Candidates c/o 2022.”

Chinese alumni lodged counter-complaints, feeling that their language was being insulted:

As the story made its way into the Chinese news media, and onto the social network Weibo, it was met with disbelief and anger. A letter signed by more than 100 mostly Chinese alumni of the business school avers that the “spurious charge has the additional feature of casting insult toward the Chinese language.”

The administrator responsible issued an apology for Greg’s conduct:

Dean Garrett emailed the M.B.A. Class of 2022 to let them know that another professor would take over. It was, he wrote, “simply unacceptable for faculty to use words in class that can marginalize, hurt and harm the psychological safety of our students.” He went on to say that he was “deeply saddened by this disturbing episode that has caused such anguish and trauma,” but that “[w]hat happened cannot be undone.”

Greg also apologised:

Patton wrote a 1,000-word email to the Marshall Graduate Student Association in which he offered a “deep apology for the discomfort and pain that I have caused members of our community.”

Nonetheless, Greg was made to step down from teaching the course.

More details here and here.

Cancel Culture is Harmful for EA

On many subjects EAs rightfully attempt to adopt a nuanced opinion, carefully and neutrally comparing the pros and cons, and only in the conclusion adopting a tentative, highly hedged, extremely provisional stance. Alas, this is not such a subject.

The rise of cancel culture is a threat to honest intellectual inquiry - a core part of the EA project. The silencing effect - whereby seeing some poor soul being destroyed makes other people keep quiet in self-preservation - intimidates people from exploring new and controversial areas. Yet it has been a consistent trend in EA thought that exactly this process of intellectual groundbreaking is vital to the EA project. EA arose out of a dissatisfaction with the existing state of affairs: dissatisfaction with people’s unwillingness to share with those in need, and dissatisfaction with the poor epistemic standards of existing charitable efforts. EA was born out of powerful critiques of these things - critiques which were, and still are, highly controversial.

I think it is easy for newcomers now, joining a movement that should get a lot of credit for professionalising over time, to not realise quite how chaotic things were in the early days.

Earning to Give - an idea which, while less central to EA than it once was, is still a key part of the movement - was extremely controversial, especially when pitched in the early days as “Who is more moral: doctors or bankers?” If 80k had given in to the very offended people, we would have lost an important part of the movement - and if we had abandoned the progenitors of the concept as ‘too controversial’, we would have lost individuals who are now highly respected leaders of the movement.

Similarly the early GiveWell was no stranger to saying highly controversial and offensive things. They proudly criticised existing charities, even though many people argued that doing so would deter donors from their entire sector, preventing innocent lives from being saved. I think everyone reading this would agree it is good that we have stuck with them!

This drive to fearlessly explore the unknown is even more important when we move outside of global health and into Longtermism. It is only after many, many years that we have finally figured out a respectable-sounding way of pitching many of the longtermist ideas - a situation we would not be in if we had shunned the earlier, more controversial versions. It is natural that someone, discovering a new vista of intellectual possibilities for the first time, should take a strong stance. Only by doing so can they fully explore it, and only by doing so can they properly show others why this is a fertile region for their own energies. Later, more cautious thinkers can refine the early work of these pioneers and make it more legible to the mainstream. It is easy now for us to read early Xrisk writings and cringe, but Rome was not built in a day, and could not have been built if we had shunned its founders for their many controversies.

Similarly, the field of animal welfare, another core EA concern, is rife with controversies. To many animal rights activists, factory farming is literally the worst thing in the history of the world. Comparisons with the Holocaust jump to their minds - both because of the nature of the activities and because of the colossal scope of the harm. Yet to the ordinary person - and here I count myself, somewhat - what could be more offensive than to compare a lovely chicken sandwich to genocide? Similarly, EAs have pushed forward the frontiers of animal welfare work, investigating invertebrate and wild animal suffering. Some of the ideas being suggested, like major ecosystem redesigns, are controversial and extreme to say the least! Yet I am sure the reader is glad that we have not shunned these people.

One way of thinking about the EA approach to charity is that people should do two things: give a larger amount, and give more intelligently. However, over time we have come to appreciate that the potential of the latter is far higher than that of the former. The average American already gives over 2% of their income to charity, and it’s hard for most people to double their income even if they really try, so realistically there is scope for at most an order of magnitude increase or so. In contrast, we know there are many orders of magnitude of difference in effectiveness between charities, even within one cause area.

We can use a similar decomposition for the development of the EA movement. We can grow by attracting new members, which is definitely valuable (so long as it doesn’t introduce value drift), but growing along this axis is difficult. We have already identified the easiest recruiting groups - elite universities - and I think it is fair to say it will be quite difficult to add an additional counterfactual order of magnitude in this way. Additionally, many of the people we recruit will be coming from similar communities, so the value of acquiring them is only the incremental value that the EA movement adds over their prior activities.

New prioritisation research, in contrast, offers vast potential for improvement. Not only was it the source of the 1000x multiplier within global health charities, it is what causes us to focus on third-world health over the US in the first place - a huge improvement, but not an uncontroversial one. And outside of global health the gains have been even larger, potentially even flipping the sign of wildlife conservation, and offering us the entire Longtermist agenda.

Additionally, I think that intellectually daring cause prioritisation research is probably beneficial, on net, for attracting new members. Is it possible some people will be put off? Of course - practically anything you do will annoy some people, at the same time as it attracts others. It’s no secret that EA has grown in no small part by posing intellectual challenges to highly intellectual people and drawing them in. We are nerd-sniping: offering the chance to discuss some of the most important issues in the world with some of the most intelligent people in the world, for the low, low price of 10% of your future income. It is no surprise that we draw heavily from academic philosophy departments and tech companies, and appeal to some hyper-analytical millionaires and billionaires.

If there truly are no new intellectual worlds left to conquer, only steady refinements to our existing mechanisms, then perhaps it would not be so harmful to cancel our pioneers. Ungrateful, perhaps, but if their work is done, could EA enter a chrysalis of cancellation, to emerge a fashionable and unimpeachably moderate movement, fully in sync with the moral fashions of the current year? Yet I see no reason to think that the consistent history of controversial ideas proving vital to the progression of the movement is over. The possibility of another crucial consideration being discovered, which transforms our understanding on an important topic and better guides our actions towards the good, is too important and too likely to be set aside.

This is EA’s unique contribution. Without the pioneering cause prioritisation, without the courage to ask important questions that none have asked before, we add almost nothing to the global charitable landscape. Only by offering something different and new - and in my opinion much, much better - is the EA project worthwhile.

Quotes from EA Leaders on searching for new ideas

This concern for intellectual inquiry into new causes is not a niche one; hopefully this section, consisting largely of quotes from a wide variety of EA sources about the importance of exploring new intellectual areas, will show that it is a widely recognised issue. As these quotes are somewhat lengthy, feel free to skip to the next section if you are already convinced.

For example, 80k has written about the importance of investigating a wide variety of causes:

Moreover, the more people involved in a community, the more reason there is for them to spread out over different issues. ... Perhaps for these reasons, many of our advisors guess that it would be ideal if 5-20% of the effective altruism community's resources were focused on issues that the community hasn't historically been as involved in, such as the ones listed below.

Similarly, Ben Todd recently emphasised the importance of EA as an intellectual project investigating new ways of improving the world:

If anything, I’m even more convinced that the ideas are what matter most about EA, and that there should at least be a branch of EA that’s focused on being an intellectual project.

Rob recently wrote about the concentration of EAs in 'safe' topics as being a potential problem:

They feel low-risk and legitimate. People you meet can easily tell you're doing something they think is cool. And you might feel more secure that you're likely doing something useful or at least sensible.

In the EA handbook we have Kelsey on the importance of being open to, and supportive of, weird ideas:

Next, we need to be continually monitoring for signs that the things we’re doing are actually doing harm, under lots of possible worldviews. That includes worldviews that aren’t intuitive, or that aren’t the way most people think about charity. … Basically, we need to cast a really, really wide net for possible ways we’re screwing up, so that the right answer is at least available to us.
Next, imagine someone walked into that 1840s EA group and said, ‘I think black people are exactly as valuable as white people and it should be illegal to discriminate against them at all,” or someone walked into the 1920s EA group and said, “I think gay rights are really important.” I want us to be a community that wouldn’t have kicked them out. I think the principle I want us to abide by is something like ‘if something is an argument for caring more about entities who are widely regarded as not worthy of such care, then even if the argument sounds pretty absurd, I am supportive of some people doing research into it. And if they’re doing that research with the intent of increasing everyone’s well-being and flourishing as much as possible, then they’re part of our movement’. ...
I hope we have space to hear out more speculative things, and specifically to hear out (1) arguments for caring about things we wouldn’t normally think to care about, (2) arguments that our society is fundamentally and importantly wrong, and (3) arguments that we are making important mistakes.

Indeed, EAG 2018 emphasised the importance of intellectual curiosity to find a new potential 'cause X':

The key idea of EA Global: San Francisco 2018 is ‘Stay Curious’. As more people take the ideas behind effective altruism seriously, we must continue to seek new problems to work on, and be mindful that we may still be missing ‘cause X’.

And Will spoke about it at length in 2016:

Given this, what we should be thinking about is: What are the sorts of major moral problems that in several hundred years we'll look back and think, "Wow, we were barbarians!"? What are the major issues that we haven't even conceptualized today?

It also features in CEA's Guiding Principles:

We are a community united by our commitment to these principles, not to a specific cause. Our goal is to do as much good as we can, and we evaluate ways to do that without committing ourselves at the outset to any particular cause. We are open to focusing our efforts on any group of beneficiaries, and to using any reasonable methods to help them.

The Guiding Principles even discuss the need to be open to weird ideas:

We recognise how difficult it is to know how to do the most good, and therefore try to avoid overconfidence, to seek out informed critiques of our own views, to be open to unusual ideas, and to take alternative points of view seriously.

The more classically minded reader might appreciate the wisdom of John Stuart Mill, one of the founders of Utilitarianism:

A state of things in which a large portion of the most active and inquiring intellects find it advisable to keep the general principles and grounds of their convictions within their own breasts, and attempt, in what they address to the public, to fit as much as they can of their own conclusions to premises which they have internally renounced, cannot send forth the open, fearless characters, and logical, consistent intellects who once adorned the thinking world. The sort of men who can be looked for under it, are either mere conformers to commonplace, or time-servers for truth, whose arguments on all great subjects are meant for their hearers, and are not those which have convinced themselves. Those who avoid this alternative, do so by narrowing their thoughts and interest to things which can be spoken of without venturing within the region of principles, that is, to small practical matters, which would come right of themselves, if but the minds of mankind were strengthened and enlarged, and which will never be made effectually right until then: while that which would strengthen and enlarge men’s minds, free and daring speculation on the highest subjects, is abandoned.
  • On Liberty, p40

Indeed, suppressing an idea is harmful for those who believe it and those who do not:

“[T]he peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error.”
  • John Stuart Mill, On Liberty, p23

This topic has been discussed on the EA forum, including highly upvoted posts like this one:

EA is a nascent field; we should expect over time our understanding of many things to change dramatically, in potentially unpredictable ways. This makes banning or discouraging topics, even if they seem irrelevant, harmful, because we don’t know which could come to be important.
Fortunately, there are some examples we have to make this clear. For example, Making Discussions Inclusive provides a list of things that we should not discuss (or at least we should be very wary of discussing). We will argue that there are actually very good reasons for EAs to discuss these topics. Even in cases where it would not be reasonable to dispute the statement as given, we suggest that people may often be accused of rejecting these statements when they actually believe something much more innocent.

and this one, comparing recent trends in the US to the Cultural Revolution in China:

If the United States were to experience a cultural revolution-like event, it would likely affect nearly all areas of impact that effective altruists care about, and would have profound effects on our ability to produce free open-ended research on controversial issues. Given that many of the ideas that effective altruists discuss -- such as genetic enhancement, factory farming abolition, and wild animal suffering -- are controversial, it is important to understand how our movement could be undermined in the aftermath of such an event. Furthermore, conformity pressures of the type exhibited in the Chinese cultural revolution could push important threads of research, such as AI alignment research, into undesirable directions.

This post argues against trying to fight back against cancellations, as it is expensive and risky to do so:

A friend of mine has parents who lived through the cultural revolution. At least one grandparent made a minor political misplay (his supervisor wanted him to cover up embezzling resources, he refused) and had his entire family history (including minor land ownership in an ancestor) dragged out of him. He was demoted, berated for years, had trash thrown at him etc. This seemed unfortunate, and likely limited his altruistic impact.

However, even he agreed that his original stance

As a general strategy, it seems much better for most people in the community to [...] quickly disavow any associations that could be seen as potentially problematic.

was too strong, because it is bad for team-building (quoting from a third party he agreed with):

If I expect my peers to lie or stab me in the back as soon as this seems useful to them, then I’ll be a lot less willing and able to work with them. This can lead to a bad feedback loop, where EAs distrust each other more and more as they become more willing to betray each other.
Highly knowledgeable and principled people will tend to be more attracted to groups that show honesty, courage, and integrity. There are a lot of contracts and cooperative arrangements that are possible between people who have different goals, but some level of trust. Losing that baseline level of trust can be extremely costly and cause mutually beneficial trades to be replaced by exploitative or mutually destructive dynamics.
Camaraderie gets things done. If you can create a group where people expect to have each other’s back, and expect to be defended if someone lies about them, then I think that makes the group much more attractive to belong to, and helps with important things like internal cooperation.

I also recommend Anna’s highly upvoted comment, strongly disagreeing with the post:

It seems to me that the EA community's strength, goodness, and power lie almost entirely in our ability to reason well (so as to be actually be "effective", rather than merely tribal/random). It lies in our ability to trust in the integrity of one anothers' speech and reasoning, and to talk together to figure out what's true.
Finding the real leverage points in the world is probably worth orders of magnitude in our impact. Our ability to think honestly and speak accurately and openly with each other seems to me to be a key part of how we access those "orders of magnitude of impact."

Even pro-censorship posts like this one only advocate restricting some topics from being discussed in some spaces:

We argue that being a part of an inclusive community can sometimes mean refraining from pursuing every last theory or thought experiment to its end in public places.

And even then, a highly critical comment received far more karma than the original post, as well as this excellent response.

To my knowledge no-one has argued that people should be banned just for having discussed an unrelated topic in an unrelated location! The move by EA Munich, which we will go over below, was considerably outside the Overton Window.

There is of course extensive discussion of and brave opposition to the problems of cancel culture outside of EA, omitted here for brevity’s sake, but one could do worse than to start with the Philadelphia Statement.

EA Munich and Robin Hanson

Robin Hanson is one of the oldest intellectual allies of the EA movement. His work has been ground-breaking on a number of topics that pertain to EA, from Signalling to the Great Filter to AGI takeoff to Prediction Markets. His blog, co-hosted for a while with Eliezer, was one of the key originators of the EA movement. This involvement has continued over time, providing the steady source of incisive yet friendly criticism that is so vital to any intellectually sound movement, including speaking at multiple EA Global events.

Scott Aaronson, everyone’s favourite Quantum Cryptography Theorist and author of an excellent book which I will definitely finish sometime very soon, had this to say about Robin’s intellectual virtues:

I’ve met many eccentric intellectuals in my life, but I have yet to meet anyone whose curiosity is more genuine than Robin’s, or whose doggedness in following a chain of reasoning is more untouched by considerations of what all the cool people will say about him at the other end.
So if you believe that the life of the mind benefits from a true diversity of opinions, from thinkers who defend positions that actually differ in novel and interesting ways from what everyone else is saying—then no matter how vehemently you disagree with any of his views, Robin seems like the prototype of what you want more of in academia. To anyone who claims that Robin’s apparent incomprehension of moral taboos, his puzzlement about social norms, are mere affectations masking some sinister Koch-brothers agenda, I reply: I’ve known Robin for years, and while I might be ignorant of many things, on this I know you’re mistaken. Call him wrongheaded, naïve, tone-deaf, insensitive, even an asshole, but don’t ever accuse him of insincerity or hidden agendas. Are his open, stated agendas not wild enough for you??
In my view, any assessment of Robin’s abrasive, tone-deaf, and sometimes even offensive intellectual style has to grapple with the fact that, over his career, Robin has originated not one but several hugely important ideas—and his ability to do so strikes me as clearly related to his style, not easily detachable from it.

Scott Alexander wrote this of Robin’s discernment back in the relatively early days of Effective Altruism, not that long after the name was coined:

Then Robin Hanson of Overcoming Bias got up and just started Robin Hansonning at everybody. First he gave a long list of things that people could do to improve the effectiveness of their charitable donations. Then he declared that since almost no one does any of these, people don’t really care about charity, they’re just trying to look good. Then he told the room – this beautiful room in the Faculty Club, full of sophisticated-looking charity donors who probably thought they were there to get a nice pat on the back – that they probably thought that just because they were attending an efficient charity talk they weren’t like that, but that probabilistically there was excellent evidence that they were.

Even Bryan Caplan, one of the foremost advocates of appeasement, speaks highly of Robin’s character:

Virtually everyone who knows Robin personally vouches for his sincerity and kindness.

And his intellect:

In a similar vein, since we should expect a man of Robin’s intelligence to produce a steady stream of original insight, the fact that he just unveiled yet another gem is no reason to be amazed.

Of course, in some sense this is by-the-by: even were Robin an irredeemable scoundrel, it would still be worthwhile defending him from unjust treatment. If ordinary people see that even the unpopular are defended, they can have confidence that they too are secure. In contrast, if they see that security comes only with popularity, people will be encouraged to constantly signal their in-group bonafides, and to always be watching over their shoulders that the mob is coming for them next.

The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all.
  • H. L. Mencken

Recently, EA Munich decided to deplatform Robin Hanson after inviting him to give a talk on tort reform. At the time they briefly summarised this as due to his ‘controversial claims’; subsequently they explained themselves somewhat in a writeup, which is apparently a “pretty thorough” description of their thought process. It, along with my subsequent communications with CEA on this topic, form the primary basis of this article.

This decision has been widely criticised, both on this website and elsewhere. I agree this decision was a very poor one, and will focus on what we can do better next time. The EA Munich team are volunteers, and I'm sure relatively junior, so I do not place too much responsibility on them, though I am extremely disappointed with the advice that came from CEA, who should know better. As such, this article lays out what I see as the main mistakes that were made, and how we can avoid making them in the future.

In his blog, Aaron suggests that there is not much more that CEA (his employer) could have done, as the decision is ultimately up to the local group. Similarly, in my communication with them, CEA repeatedly emphasised that ultimately it all comes down to the local group. Naturally, I fully agree with this - the independence of local groups is something that CEA should and must respect. But I disagree that this lets CEA off the hook. In cases where a local group comes to CEA for guidance, CEA has the obligation to provide the best possible advice, and CEA clearly failed to do so here.

Mistakes and How to Avoid Them

I realise that a generalised exhortation to resist cancel culture can be difficult to act on, especially when presented with plausible-seeming and highly specific considerations in the opposite direction. So in this section I will try to forensically lay out the specific mistakes that were made in this instance, and how we can avoid them in future.

Defend core EA activities

Most important is to constantly bear in mind that the purpose of local groups is not the avoidance of conflict, or minimising the number of people who are annoyed with you: it is promoting the goals and values of the Effective Altruism movement. In this case, EA Munich, and CEA's advice to them, directly undermined one of the core tenets of EA: the freedom and courage to investigate new potential cause areas.

As we discussed at length in the previous section, this requires people be willing to investigate new moral issues, which obviously are going to sound weird (and potentially immoral!) to many people. To avid carnivores, the idea of investing in animal rights sounds like immorally imposing costs and restrictions on real human people, and neglecting the real problems people have that we could be solving, for the sake of … animals? But as EAs, we should push past the 'ambient sense of unease' and evaluate such new ideas logically. Even if poorly presented, we should be willing to steelman them and give them a fair hearing; if the idea's originator saves us this work by writing lengthy and detailed arguments in their favour, all the better.

The absolute minimum requirement is that CEA not actively undermine people who are doing this work. But really, CEA should be living up to the talk and actively supporting these people.

Robin is precisely the sort of thinker who is disproportionately likely to come up with the next Cause X. He is the intellectual father of prediction markets, a subject of immense discussion and advocacy in the EA community. He has written on the subject of human hypocrisy, helping shed light on the very reasons that people ignore EA analysis in favour of their lower motives, and was the first in the EA community to argue for giving later instead of giving now. He wrote extensively on AI before it was a major focus of the EA community, in his debate on AI FOOM and his writing on emulations. He wrote one of the classic papers about modeling history as a series of exponential growth modes, research currently being pursued with substantial resources by Open Philanthropy’s David Roodman. Robin’s production of novel ideas has greatly exceeded that of most academics, typically written up in accessible blogposts. Politics isn’t about policy. Against prestige. This is the dream time. Stories are like religion. Inequality Is About Grabbing. This AI Boom Will Also Bust. If you want to know what is a likely Cause X, a decent way to approach that question would be to start by looking at whatever Robin Hanson has been blogging about a lot.

Of course, being a prolific producer of premium prioritisation posts doesn’t mean we should give someone a free pass for behaving immorally. For all that EAs are consequentialists, I don’t think we should ignore wrongdoing ‘for the greater good’. We can, I hope, defend the good without giving carte blanche to the bad, even when both exist within the same person. Which, then, was the target in the cancelling of Robin, as exemplified in the Slate article? Did they correctly castigate his vice, or were they slandering his virtue?

It seems clear to me that the areas of Robin's work referenced in the Slate article - things like his post Two Types of Envy, where he points to a perceived inconsistency in how people talk about financial and sexual inequality, and the negative societal effects of, and mental health impacts to, large numbers of disaffected low-status males (e.g. the Incel movement) - fall firmly within the category of what we are talking about by ‘Cause X’ research. For those who dare to discuss this as being a problem that afflicts them, society is quick to offer mockery, but almost never sympathy or solutions. Robin analyses these issues in detail, compares them to another major cause, and conducts empirical work to try to estimate their magnitude.

Now, mere indifference from EAs could be understood - many people make proposals for a Cause X, and most of them are terrible. People do not have an automatic right to a hearing, because our time is limited and our attention could be spent elsewhere. Similarly, disagreement is a perfectly reasonable response.

I can see the argument that we do not necessarily have to publicly defend everyone who is attacked unfairly, as our political and reputational capital is finite. This is a bit of a dangerous path to go down - if we do not stand up for our friends, who will stand up for us? - but it does highlight an important consideration, and I wouldn’t blame someone who took this perspective. Defending people from unfair treatment is good and virtuous, but supererogatory.

However, what I think is clear is that EA, and CEA specifically, should not treat someone worse as a result of their good faith attempts at EA prioritisation research than they would have otherwise. To violate this is a fundamental betrayal of the movement and the community. If you would have had someone speak in the absence of their innovative EA work, it is unacceptable to deplatform them in response to smears resulting from this work.

Distinguish between truth-seeking criticism and attempted cancellations

Often, criticism is good! People are often wrong, and it is good to point this out, if possible in a sensitive fashion that is not unnecessarily nasty. But not all criticism is equal.

An example of what I think of as relatively good criticism is Alexey Guzey’s article criticising Why We Sleep. This article is good because it is logical and methodical, laying out precise arguments for why we should be sceptical of the book. It does not attempt to distort the author’s intentions; it shows that even a generous reading will find the book sadly deficient. Nor does he cherry-pick a small section; Alexey clearly explains which section of the book he focused on and why. The article does not rely on insinuation, nor does it even directly criticise the author at all. Most importantly, it aims to establish truth from falsehood.

Unfortunately, not all criticism is like this. In EA Munich’s writeup, they highlighted an article in Slate. I think it should be clear to even an uninformed reader that this piece is not in any way a fair or objective account. The Slate writer is deliberately attempting to frame him in the worst possible light, in an article full of innuendo and viciousness. There is no careful evaluation of Robin’s arguments - indeed, only one paragraph, towards the end of the article, even pretends to be forming a counterargument. It does not attempt a charitable reading of Robin. It willfully selects a handful of blog posts of his solely to make him look as bad as possible. This is not an article that is trying to make our beliefs about the world more accurate - it is trying to belittle and humiliate someone. It is a hit piece.

One technique for doing this analysis is to examine the Fnords: if we remove the filler words from the first few lines of the Slate piece, I think it is clear what the subtext is, and how fair the article is going to be:

economist creepy libertarian-leaning professor notorious odd disconcerting socio-sexual...

I would be very surprised if any article that began with such a tone was conducive to truth-seeking - save, perhaps, as a cautionary example.

So my advice is to carefully distinguish between truth-seeking criticism of someone’s arguments, and social shaming ad hominem insinuation against a person. The former is potentially very valuable, the latter… not so much. This is something that individual local groups should do, and that CEA, if it sees they are faltering in this regard, can step in and gently provide guidance.

Determine the relevance of criticism

Some criticism is highly relevant. One particular thing that criticism can tell us is that a potential speaker is not as much of an expert as we thought. For example, if you were considering inviting a famous academic to tell you about the science of sleep, learning that his book was highly inaccurate is valuable, because it implies that your potential speaker is actually less knowledgeable than you assumed about the topic. If you invite him, he might tell you false things about sleep, which would frustrate your purpose of learning true things about sleep.

In contrast, some criticism is not relevant. For example, criticism is less relevant if it is concerned with a different topic. In the case of the Slate article mentioned above, the ‘argument’ is basically that Robin is creepy because of the topics he wrote about in some blog posts. Given that EA Munich had invited him to speak about a totally different topic, the relevance is significantly reduced. If the topic of his talk is also creepy… well, maybe you shouldn’t have invited him to talk about it! Furthermore, since the Slate article didn’t really bother to argue that Robin was actually mistaken, let alone systematically fraudulent as the sleep critique did, it doesn’t give us much reason to doubt Robin’s general intellectual calibre.

Perhaps because of the Horns Effect (mirror to the Halo Effect), it is easy to allow one problem to ‘spill over’ and affect your evaluation of a person’s other attributes, even if this is not logical. I encourage you to bear in mind that, even if someone has flaws, they may not be relevant flaws. For this we can consult no less an authority than the US President (no, the previous one):

You know this idea of purity and you're never compromised and you're always politically woke and all that stuff, you should get over that quickly. The world is messy, there are ambiguities. People who do really good stuff have flaws.
  • Obama, 2019

Apply your standards consistently

Rules and standards are very important for organising any sort of society. However, when applied inconsistently they can be used as a weapon to attack unpopular people while letting popular people off the hook. If you apply a standard only when external actors demand it, you are letting them control you. But by being cognizant of this, you can protect yourself.

In this case, one of the main reasons EA Munich gave for deplatforming Robin are that they are afraid of being associated with controversial ideas, and of the consequences of letting Robin talk. So the standard here seems to be that controversial ideas should be avoided.

However, just the previous month they hosted a talk on psychedelic drugs (according to Facebook). Needless to say, psychedelic drugs are a highly controversial topic! In the US they are generally considered Schedule I drugs with a high potential for abuse. Possessing these drugs is in general (with very limited exceptions) a felony, with the potential for very harsh penalties. The War on Drugs is a highly political topic on which people have very strong opinions. In this case, EA Munich could have noticed that a rule against controversial topics would have excluded this previous talk that they had been happy to let take place.

Of course, there is a big difference between the talk on psychedelics and Robin's talk: the subject of Robin's talk was a totally different and, I think, unobjectionable topic (reforming tort law) - suggesting that, if anything, greater concern would have been due about the psychedelics talk.

So I recommend you consider the reasons being given for deplatforming a speaker, and think about whether you would really want to apply those principles in general.

Focus on getting the decision right, rather than appearances

The other reason they gave focused on the potential negative consequences of letting Robin talk. This consisted of a frankly bizarre paragraph (quoted below), suggesting that allowing Robin to give a Zoom talk to ~20-30 people, on an unrelated topic, might accidentally undo feminism and civil rights (or perhaps re-institute slavery? unclear), despite neither being his intention. I am almost loath to quote it because it seems like a strawman:

Specifically, women's rights have been suppressed for most of human history, and we believe that the rise of emancipatory women's movements has been a tremendous humanitarian achievement over the last few hundred years. Statements such as Hanson's might rekindle misogynistic sentiments and destroy some of the progress made so far, even if that is not Professor Hanson's intention. In a similar vein, we see the discussion around the tweet concerning Juneteenth. We also believe that Professor Hanson perhaps underestimates the impact of these statements.

If this was a serious concern of theirs then the nicest thing I can say is that they were hopelessly miscalibrated.

It can be difficult to tell when one’s reasoning is amiss. However, I think this is where CEA could have helped. A reasonable response, on learning of this concern, would have been to gently argue that EA Munich was exaggerating the threat. The fact that EA Munich would make such an argument should have been a sign to CEA that they were not reasoning clearly, and as such CEA should have encouraged them to reconsider their decision.

Instead, apparently CEA encouraged them to reconsider the language in their justification:

Their language on him destroying the progress of feminism was originally stronger, and I suggested they tone it down.
  • Personal communication

This is, I think, extremely wrongheaded. Our objective should be to make the right decision. A public summary of the decision-making should contain an accurate account of the decision-making. If you feel the need to ‘tone down’ part of it, this could be a sign you regret part of that decision… in which case you should consider changing your mind. CEA should have taken the opportunity to suggest that EA Munich had misjudged the situation, and that they should consider changing their mind.

Think about the wider impact and precedent.

It is natural for the organisers of a small group to just want the whole thing to go away. They just wanted to host some nice discussion groups and tell people about AMF - they didn’t ask for any of this! In such a scenario, giving in to the pressure and disinviting the speaker seems like the easy option. Maybe it’s not the right one - EA Munich did mention they were worried about Cancel Culture, so they had some understanding of the issues - but it is at least an end to it.

It seems such considerations were high in the minds of EA Munich, who spoke of wanting to take the action that would leave the fewest people annoyed with them.

Alas, this is a very poor decision criterion. It is the easy path, but in EA we try to do what is right, and EA groups should actually live up to these virtues rather than abandoning them because they’re hard. EA groups exist to serve the principles and objectives of the EA movement, not the convenience of the organisers. By giving in, we grant a heckler’s veto to ne’er-do-wells. Every instance of backing down creates a precedent that controversial speakers should be canceled, one that affects both this group and all other local groups. And it encourages people to be quicker to take offense and to condemn, a danger that has been well understood since Kipling.

It’s important not to think of giving in as being the ‘middle’ route, or a ‘compromise’ decision. I can see why people might naively think this: they see some people who support a speaker, and some who condemn him, so surely the middle ground is to simply not feature the speaker in any way? But this is not the case - tolerance itself is the middle ground, between Catholic and Protestant, or Right and Left. Giving one side - or rather, a small group of extremists on one side - a veto is far from evenhanded: it immensely privileges that group. Alternatively, we could afford everyone such respect - not merely the loudest and most aggressive - which would at least be fair. But as almost everything is offensive to somebody, the range of permitted opinions left would be very small indeed! Only by saying to partisans of all stripes, “I know you are offended by this, but we judge ideas for ourselves, on their merit” can we have discussion unfettered by a political censor.

Now, one could object that this is hyperbole. After all, a group is not obliged to invite Robin to speak in the first place. Why then can they not equally uninvite him? Yes, it will be a little inconvenient, but there’s a pandemic, so it’s not like anyone has paid for plane tickets or hotels.

Here one man’s modus ponens is another’s modus tollens. The same reasons that make it bad to deplatform a speaker as the result of a vicious cancel culture attack also make it bad to decline to invite them in the first place on those grounds. There are many acceptable reasons not to invite someone - timing, relevance, a full schedule, or simply being unaware of their existence - but appeasing cancel culture is not one of them. We would be ill-served if, to avoid the risk of ever having to deplatform someone, groups simply became ultra-conservative about invitations and never included anyone who wasn’t a CEA employee!

Here I think an analogy with US labour law might be illuminating. Most workers in the US have ‘at-will’ contracts: the worker can quit, and the employer can fire them, at basically any time for any reason, except for a narrow group of forbidden motivations. You can quit because you’re not paid enough, or your colleagues are annoying, or you’re just sick of the colour of the carpets. You can fire someone for being unproductive, for having a name beginning with the letter ‘G’, or for supporting the wrong sports team. But you can’t fire them because of their race, or because they refused to break the law, or because they took maternity leave. These are properties that the US legal system considers important enough to protect, even in the general context of freedom of association.

Similarly, in general local groups should be free to do more or less what they want. We should want to let people explore new approaches, which might be better suited for promoting Effective Altruism. There is simply a narrow class of activities which should be strongly avoided, and which CEA should strongly advise against: deplatforming a speaker because of Cancel Culture is such a proscribed activity.


In this particular scenario, here are some things I think it would have been good for CEA to do, when asked for advice by EA Munich:

  1. Remind them that openness to unusual ideas is one of the guiding principles of Effective Altruism, and that local groups should uphold and promote this.
  2. Clarify how important fundamental cause research that challenges existing ideas is to the movement, and that we should not punish people for engaging in it.
  3. Explain that the Slate article they linked is not a reliable source of information, and encourage them to refer to Robin's own work.
  4. Explain that deplatforming someone is a serious action, and widely seen as not equivalent to simply never having invited them in the first place.
  5. Explain that Robin is very unlikely to accidentally undo feminism during his talk, and this should not be a major part of their decision making process.
  6. Not take EA Munich’s claim that they understood the dangers of Cancel Culture at face value: actively discuss this with them to ensure they understand why it is harmful to the movement.
  7. To the extent that EA Munich made their decision for poor reasons, encourage them to reconsider.

The final decision is of course up to the local organisers. However, I think by providing this advice, CEA could have better equipped them to make the decision in an epistemically virtuous way that supported the goals of the movement.


Thanks to Nick Whitaker and several invaluable anonymous proofreaders for their extremely helpful feedback. Any mistakes remain my own. A draft of this document was shared with CEA and EA Munich prior to publication, and one section removed as a gesture of goodwill.

edited 2020-10-15: typos


78 comments

I appreciate that Larks sent a draft of this post to CEA, and that we had the chance to give some feedback and do some fact-checking.

I agree with many of the concerns in this post. I also see some of this differently.

In particular, I agree that a climate of fear — wherever it originates — silences not only people who are directly targeted, but also others who see what happened to someone else. That silencing limits writers/speakers, limits readers/listeners who won’t hear the ideas or information they have to offer, and ultimately limits our ability to find ways to do good in the world.

These are real and serious costs. I’ve been talking with my coworkers about them over the last months and seeking input from other people who are particularly concerned about them. I’ll continue to do that.

But I think there are also real costs to pushing groups to go forward with events they don’t want to hold. I’m still thinking through how I see the tradeoffs between these costs and the costs above, but here’s one I think is relevant:


It makes it more costly to be an organizer. In one discussion amongst group organizers after the Munich situation, one organizer wrote about the Peter Singer talk their group hosted. [I’m waiting to see if I can give a fuller quote, but their summary was about how the Q&A session got conflicted enough that the group was known as “the group that invited Peter Singer” for two years and basically overpowered any other impression students had of what the EA group was about.]

Just for context, if anyone is unaware, Peter Singer is extremely controversial in Germany, much (/even) more so than in the English-speaking world. There was a talk by him in Cologne a few years ago, and everyone was a bit surprised it didn't get shouted down by student activists.

So I can definitely see this happening, and sympathise with the desire for it not to happen again, even though I still think the Hanson decision was ill-made.

+1, in the German-speaking area, activists have tried to prevent people from gaining physical access to where Singer's talk was to be hosted, and Singer was even physically assaulted on one occasion (a couple of decades ago though). Some venues have cancelled him. There are often protests (by disability rights activists, religious people, etc.) where he speaks.

As one of the organisers of the EA Munich group, this was the first thing I thought of when we heard about the press coverage of Robin Hanson: what can we learn from EA's association with the controversies around Peter Singer? I was thinking of your comment and of Ben Todd's quote: "Once your message is out there, it tends to stick around for years, so if you get the message wrong, you’ve harmed years of future efforts." I think there is much harm that can be done in canceling, but it should be weighed against the potential harm of hurting the movement in a country where values and sentiments can be different than in the English-speaking world.

For me the Robin Hanson talk would have been my first event as a co-organiser, and seeing a potential cooperation partner unearth the negative press about Robin Hanson and tell us that they would not be able to work with us if we hosted him was an indication that we shouldn't rush to hold this talk. Oliver Habryka summarised this pretty well:

Having participated in a debrief meeting for EA Munich, my assessment is indeed that one of the primary reasons the event was cancelled was due to fear of disruptors showing up
... (read more)
Julia_Wise: I got permission to add the full quote, though the meaning is the same. This example was actually in the US.

willbradshaw: Ah, then my comment was based on a misunderstanding. Apologies.

Julia_Wise: But still relevant for the Munich organizers, since Singer seems to get protested more per event in Germany than in other countries.

I don't have any arguments over cancel culture or anything general like that, but I am a bit bothered by a view that you and others seem to have. I  don't consider Robin Hanson an "intellectual ally" of the EA movement; I've never seen him publicly praise it or make public donation decisions, but he has claimed that do-gooding is controlling and dangerous, that altruism is all signaling with selfish motivations, that we should just save our money and wait for some unspecified future date to give it away, and that poor faraway people are less likely to exist according to simulation theory so we should be less inclined to help them. On top of that he made some pretty uncharitable statements about EA Munich and CEA after this affair. And some of his pursuits suggest that he doesn't care if he turns himself into a super controversial figure who brings negative attention towards EA by association. These things can be understandable on their own, you can rationalize each one, but when you put it all together it paints a picture of someone who basically doesn't care about EA at all. It just happens to be the case that he was big in the rationalist blogosphere and lots of EAs (includi... (read more)

I want to open by saying that there are many things about this post I appreciate, and accordingly I upvoted it despite disagreeing with many particulars. Things I appreciate include, but are not limited to:

-The detailed block-by-block approach to making the case for both cancel culture's prevalence and its potential harm to the movement.

-An attempt to offer a concrete alternative pathway to CEA and local groups that face similar decisions in future.

-Many attempts throughout the post to imagine the viewpoint of someone who might disagree, and preempt the most obvious responses.

But there's still a piece I think is missing. I don't fault Larks for this directly, since the post is already very long and covers a lot of ground, but it's the area that I always find myself wanting to hear more about in these discussions, and so would like to hear more about from either Larks or others in reply to this comment. It relates to both of these quotes.

Of course, being a prolific producer of premium prioritisation posts doesn’t mean we should give someone a free pass for behaving immorally. For all that EAs are consequentialists, I don’t think we should ignore wr
... (read more)

It's a good question. I've thought about this a bit in the past.

One surprising rule is that overall I think people with a criminal record should still be welcome to contribute in many ways. If you're in prison, I think you should generally be allowed to e.g. submit papers to physics journals, you shouldn't be precluded from contributing to humanity and science. Similarly, I think giving remote talks and publishing on the EA Forum should not be totally shut off (though likely hampered in some ways) for people who have behaved badly and broken laws. (Obviously different rules apply for hiring them and inviting them to in-person events, where you need to look at the kind of criminal behavior and see if it's relevant.) 

I feel fairly differently to people who have done damage in and to members of the EA community. Someone like Gleb Tsipursky hasn't even broken any laws and should still be kicked out and not welcomed back for something like 10 years, and even then he probably won't have changed enough (most people don't).

In general EA is outcome-oriented, it's not a hobby community, there's sh*t that needs to be done because civilization is inadequate and literally everything is sti... (read more)

jackmalde: As I said in an earlier comment, I think we need to evaluate this on a case-by-case basis and ultimately make decisions based on a (rough) calculation of expected benefit vs expected harm of letting someone speak. So for me there isn't really a standard "line on behaving immorally". For example, if someone has bad character but it is genuinely plausible they might come up with cause X, then I reckon they should (probably) be allowed to speak. So I don't think actual 'rules' are helpful. General 'reasons' why we might or might not invite a speaker, on the other hand, are certainly helpful, and I think Larks alludes to some in this post (for example the cause X point!). I didn't actually interpret Larks's post as trying to contribute to the "ongoing prosecution-and-defence of Robin's character or work", but instead think it is trying to add to the cancel culture conversation more generally, using Robin's case as a useful example.

AGB: Thanks for your response. Sorry, this is on me. The original draft of that sentence read something like "I agree with Khorton below that little is being served by an ongoing prosecution-and-defence of Robin's character or work, so I'm not going to weigh in again on those specific points and request others replying to this comment do the same, instead focusing on the question of what rules we do/don't want in general". I then cut the sentence down, but missed that in doing so it could now be read as implying that this was Larks' objective. That wasn't intentional, and I don't think this.

Jonas Vollmer: (Retracted.)

To better understand your view, what are some cases where you think it would be right to either

  1. not invite someone to speak, or
  2. cancel a talk you've already started organising,

but only just?

That is, cases where it's just slightly over the line of being justified.

Thank you for writing this important post Larks!

I would add that the harm from cancel culture's chilling effect may be a lot more severe than what people tend to imagine. The chilling effect does not only prevent people from writing things that would actually get them "canceled". Rather, it can prevent people from writing things that they think have merely a non-negligible chance (e.g. 0.1%) of getting them canceled (at some point in the future); and that is probably a much larger and more important set of things/ideas that we silently lose.

+1. I also think that the chilling effect can extend to people's thoughts, i.e., limiting what people even let themselves think let alone write.

See also https://www.lesswrong.com/posts/2LtJ7xpxDS9Gu5NYq/open-and-welcome-thread-october-2020?commentId=YrRcRxNiJupZjfgnc

ETA: In case it's not clear, my point is that there's also an additional chilling effect from even smaller but more extreme tail risks.

I urge those who are concerned about cancel culture to think more strategically. For instance, why has cancel culture taken over almost all intellectual and cultural institutions? What can EA do to fight it that those other institutions couldn't do, or didn't think of? Although I upvoted this post for trying to fight the good fight, I really doubt that what it suggests is going to be enough in the long run.

Although the post includes a section titled "The Nature of Cancel Culture", it seems silent on the social/political dynamics driving cancel culture's quick and widespread adoption. To make an analogy, it's like trying to defend a group of people against an infectious disease that has already become a pandemic among the wider society, without understanding its mechanism of infection, and hoping to make do with just common sense hygiene.

In one particularly striking example, I came across this article about a former head of the ACLU. It talks about how the ACLU has been retreating from its free speech principles, and includes this sentence:

But the ACLU has also waded into partisan political issues, at precisely the same time as it was retreating on First Amendment issues.

Does it... (read more)

This comment expresses something I was considering saying, but more clearly than I could. I would add that thinking strategically about this cultural phenomenon involves not only trying to understand its mechanism of action, but also coming up with frameworks for deciding what tradeoffs to make in response to it. I am personally very disturbed by the potential of cancel culture to undermine or destroy EA, and my natural reaction is to believe that we should stand firm and make no concessions to it, as well as to upvote posts and comments that express this sentiment. This is not, however, a position I feel I can endorse on reflection: it seems instead that protecting our movement against this risk involves striking a difficult and delicate balance between excessive and insufficient relaxation of our epistemic standards. By giving in too much the EA movement risks relinquishing its core principles, but by giving in too little the movement risks ruining its reputation. Unfortunately, I suspect that an open discussion of this issue may itself pose a reputational risk, and in fact I'm not sure it's even a good idea to have public posts like the one this comment is responding to, however much I agree with it.

I would add that thinking strategically about this cultural phenomenon involves not only trying to understand its mechanism of action, but also coming up with frameworks for deciding what tradeoffs to make in response to it. I am personally very disturbed by the potential of cancel culture to undermine or destroy EA, and my natural reaction is to believe that we should stand firm and make no concessions to it, as well as to upvote posts and comments that express this sentiment. This is not, however, a position I feel I can endorse on reflection[...]

This seems right to me, and I upvoted to support (something like) this statement. I think there's a great deal of danger in both directions here.

(Not just for reputational reasons. I also think that there are lots of SJ-aligned – but very sincere – EAs who are feeling pretty alienated from anti-CC EAs right now, and it would be very bad to lose them.)

It seems instead that protecting our movement against this risk involves striking a difficult and delicate balance between excessive and insufficient relaxation of our epistemic standards. By giving in too much the EA movement risks relinquishing its core principles, but by giving in to

... (read more)
Pablo: Yes, I agree. What I'm uncertain about is whether it's desirable to have more of these posts at the current margin. And to be clear: by saying I'm uncertain whether it's a good idea, I don't mean to suggest it's not a good idea; I'm simply agnostic.

willbradshaw: Okay, sure, at the margin I agree it's tricky. Both for reputational reasons, and the broad-tent/community-cohesion concerns I mention above.

Milan_Griffes: Trump demonstrates that thoroughgoing shamelessness effectively wards off cancellation, at least in the short run.

I disagree. Trump draws his power from the Red Tribe; the Blues can't cancel him because they don't have leverage over him.

We, by contrast, are mostly either Blues ourselves or embedded in Blue communities.

Can you give an example of someone or some community in a situation like ours, that adopted a strategy of thoroughgoing shamelessness, and that successfully avoided cancellation?

Milan_Griffes: Agree that the Blues can't cancel Trump. Note that being affiliated with Red Tribe isn't sufficient to avoid cancellation (though it probably helps) – see Petraeus [https://en.wikipedia.org/wiki/Petraeus_scandal], see the Republicans on these lists: 1 [https://en.wikipedia.org/wiki/2017%E2%80%9318_United_States_political_sexual_scandals], 2 [https://en.wikipedia.org/wiki/List_of_federal_political_sex_scandals_in_the_United_States]. Jordan Peterson seems basically impossible to cancel due to a combination of his shamelessness & his virtue (he isn't really Blue Tribe though). Same for Joe Rogan and Tyler Cowen.

Tsunayoshi: Jordan Peterson is probably indeed a good example. A more objective way to describe his demeanor than shamelessness is "not giving in". One major reason why he seems to be popular is his perceived willingness to stick to controversial claims. In turn that popularity is some form of protection against attempts to get him to resign [https://thevarsity.ca/2017/11/29/hundreds-sign-open-letter-to-u-of-t-admin-calling-for-jordan-petersons-termination/] from his position at the University of Toronto. However, I think that there are significant differences between Peterson and EA's situation, so Peterson's example is not my endorsement of a "shamelessness" strategy.

It seems like you believe that one's decision of whether or not to disinvite a speaker should depend only on one's beliefs about the speaker's character, intellectual merits, etc. and in particular not on how other people would react.

Suppose that you receive a credible threat that if you let already-invited person X speak at your event, then multiple bombs would be set off, killing hundreds of people. Can we agree that in that situation it is correct to cancel the event?

If so, then it seems like at least in extreme cases, you agree that the decision of whether or not to hold an event can depend on how other people react. I don't see why you seem to assume that in the EA Munich case, the consequences are not bad enough that EA Munich's decision is reasonable.

Some plausible (though not probable) consequences of hosting the talk:

  • Protests disrupting the event (this has previously happened to a local EA group)
  • Organizers themselves get cancelled
  • Most members of the club leave due to risk of the above or disagreements with the club's priorities

At least the first two seem quite bad, there's room for debate on the third.

In addition, while I agree that the e... (read more)

Milan_Griffes: Also of The Apology [http://classics.mit.edu/Plato/apology.html], though that's obviously an extreme case.

Naturally, you have to understand, Rohin, that in all of the situations where you tell me what the threat is, I'm very motivated to do it anyway? It's an emotion of stubbornness and anger, and when I flesh it out in game-theoretic terms it's a strong signal of how much I'm willing to not submit to threats in general.

Returning to the emotional side, I want to say something like "f*ck you for threatening to kill people, I will never give you control over me and my community, and we will find you and we will make sure it was not worth it for you, at the cost of our own resources".

Yeah, I'm aware that is the emotional response (I feel it too), and I agree the game theoretic reason for not giving in to threats is important. However, it's certainly not a theorem of game theory that you always do better if you don't give in to threats, and sometimes giving in will be the right decision.

we will find you and we will make sure it was not worth it for you, at the cost of our own resources

This is often not an option. (It seems pretty hard to retaliate against an online mob, though I suppose you could randomly select particular members to retaliate against.)

Another good example is bullying. A child has ~no resources to speak of, and bullies will threaten to hurt them unless they do X. Would you really advise this child not to give in to the bully?

(Assume for the sake of the hypothetical the child has already tried to get adults involved and it has done ~nothing, as I am told is in fact often the case. No, the child can't coordinate with other children to fight the bully, because children are not that good at coordinating.)
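[Editorial aside: the point above, that refusing threats is not always optimal, can be sketched with a toy model. All payoffs, probabilities, and function names below are my own illustrative assumptions, not anything from the discussion: in a one-shot interaction a sufficiently credible threat can make conceding the payoff-maximizing choice, while in a repeated setting a reputation for refusing can come out ahead.]

```python
# Toy one-shot vs repeated threat game (all payoffs hypothetical).

def expected_payoff(action, p_follow_through, cost_concede=-1.0, cost_harm=-10.0):
    """Target's expected payoff for a single threat interaction."""
    if action == "concede":
        return cost_concede
    # If the target refuses, the threatener follows through with probability p.
    return p_follow_through * cost_harm

# One-shot: against a credible threat (p = 0.5), conceding loses less.
one_shot_concede = expected_payoff("concede", 0.5)  # -1.0
one_shot_refuse = expected_payoff("refuse", 0.5)    # -5.0

def repeated_payoff(action, p_follow_through, rounds=10):
    """Crude repeated model: conceding invites a fresh threat every round,
    while a single refusal deters all future threats."""
    if action == "concede":
        return rounds * expected_payoff("concede", p_follow_through)
    return expected_payoff("refuse", p_follow_through)

# Repeated: the deterrence value of refusing dominates.
repeated_concede = repeated_payoff("concede", 0.5)  # -10.0
repeated_refuse = repeated_payoff("refuse", 0.5)    # -5.0
```

The repeated case is what the "never give in" heuristic is tracking; the one-shot case, or any case where refusal buys no deterrence, is where it can fail.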

Another case where 'precommitment to refuse all threats' is an unwise strategy (and a case more relevant to the discussion, as I don't think all opponents to hosting a speaker like Hanson either see themselves or should be seen as bullies attempting coercion) is where your opponent is trying to warn you rather than trying to blackmail you. (cf. 1, 2)

Suppose Alice sincerely believes some of Bob's writing is unapologetically misogynistic. She believes it is important one does not give misogynists a platform and implicit approbation. Thus she finds hosting Bob abhorrent, and is dismayed that a group at her university is planning to do just this. She approaches this group, making clear her objections and stating her intention, if this goes ahead, to (e.g.) protest this event, stridently criticise the group in the student paper for hosting him, petition the university to withdraw affiliation, and so on.

This could be an attempt to bully (where usual game theory provides a good reason to refuse to concede anything on principle). But it also could not be: Alice may be explaining what responses she would make to protect her interests which the groups planned action would harm... (read more)

I agree with parts of this and disagree with other parts.

First off:

First, if she is acting in good faith, pre-committing to refuse any compromise for 'do not give in to bullying' reasons means the parties always end up at their respective BATNAs even if there were mutually beneficial compromises to be struck.

Definitely agree that pre-committing seems like a bad idea (as you could probably guess from my previous comment).

Second, wrongly presuming bad faith for Alice seems apt to induce her to make a symmetrical mistake presuming bad faith for you. To Alice, malice explains well why you were unwilling to even contemplate compromise, why you considered yourself obliged out of principle  to persist with actions that harm her interests, and why you call her desire to combat misogyny bullying and blackmail.

I agree with this in the abstract, but for the specifics of this particular case, do you in fact think that online mobs / cancel culture / groups who show up to protest your event without warning should be engaged with on a good faith assumption? I struggle to imagine any of these groups accepting anything other than full concession to their demands, such that you're stuck w... (read more)

I agree with this in the abstract, but for the specifics of this particular case, do you in fact think that online mobs / cancel culture / groups who show up to protest your event without warning should be engaged with on a good faith assumption? I struggle to imagine any of these groups accepting anything other than full concession to their demands, such that you're stuck with the BATNA regardless.

I think so. 

In the abstract, 'negotiating via ultimatum' (e.g. "you must cancel the talk, or I will do this") does not mean one is acting in bad faith. Alice may foresee there is no bargaining frontier, but is informing you what your BATNA looks like and gives you the opportunity to consider whether 'giving in' is nonetheless better for you (this may not be very 'nice', but it isn't 'blackmail'). A lot turns on whether her 'or else' is plausibly recommended by the lights of her interests (e.g. she would do these things if we had already held the event/she believed our pre-commitment to do so) or she is threatening spiteful actions where their primary value is her hope they alter our behaviour (e.g. she would at least privately wish she didn't have to 'follow through' if we def... (read more)

Yeah, I think I agree with everything you're saying. I think we were probably thinking of different aspects of the situation -- I'm imagining the sorts of crusades that were given as examples in the OP (for which a good faith assumption seems straightforwardly wrong, and a bad faith assumption seems straightforwardly correct), whereas you're imagining other situations like a university withdrawing affiliation (where it seems far more murky and hard to label as good or bad faith).

Also, I realize this wasn't clear before, but I emphatically don't think that making threats is necessarily immoral or even bad; it depends on the context (as you've been elucidating).

kokotajlod: I think I agree with you except for your example. I'm not sure, but it seems plausible to me that in many cases the bullied kid doing X is a bad idea. It seems like it will encourage the bullies to ask for Y and Z later.

MichaelStJules: In this case, AFAIK, no one in particular was making a threat yet. So, instead, not canceling the event is exposing yourself to a potential threat and the loss (whether you submit or not, or even retaliate) that would result. Avoiding the threat in the first place to avoid its costs is a reason to cancel the event. Canceling is like hiring bodyguards for the president and transporting them in an armoured vehicle, instead of leaving them exposed to attacks and then retaliating afterwards if they are attacked.

No it's not! Avoiding the action because you know you'll be threatened until you change course is the same as submitting to the threat.

MichaelStJules: (When I write "explicit threat(s)" below, I'm mostly thinking demands from outsiders to cancel the event and risks of EA Munich or its organizers being cancelled or explicit threats from outsiders to cancel EA Munich without necessarily following through.) Abstractly, sure, the game theory is similar, since cancelling is also a cost, but I think the actual payoffs/costs can be very different, as you may be exposing yourself to more risk, and being explicitly threatened at all can incur additional (net) costs beyond the cost of cancellation. Also, if we were talking about not planning the event in the first place (that's another way to avoid the action, although that's not what happened here), it'll go unnoticed, so you wouldn't be known as someone who submits to threats to make yourself a target for more. A group won't likely be known for not inviting certain controversial speakers in the first place. I think in this case, we can say the game theory is pretty different due to asymmetric information. Cancelling early can also reduce the perception of submission to others who would make threats compared to cancelling after explicit threats, since explicit threats bring attention with them. As I wrote, there are costs that come from being threatened that are additional to just (the costs of) cancelling the event that you can avoid if you're never explicitly threatened in the first place. It's easier to avoid negative perceptions (like being known as “the group that invited Peter Singer”, as Julia mentioned) if you didn't plan the event in the first place or cancelled early before any threat was made (and even if no explicit threat was made at all). Once a threat is actually made, negative perceptions are more likely to result even if you submit, since threats bring negative perceptions with them.
Cancelling after being threatened might seem like giving an apology after being caught, so might not appear genuine or the cancellation will just be less memorable than t
MichaelStJules: I'm assuming you're referring to my analogy with protecting the president, rather than my claim "Avoiding the threat in the first place to avoid its costs is a reason to cancel the event", which seems obvious given the risk that they will follow through on the threat (although you may have stronger reasons in the opposite direction). Protecting the president has costs and is avoiding the action of leaving the president unprotected, which you would prefer if there were no threats or risks of threats. How does "Avoiding the action because you know you'll be threatened until you change course is the same as submitting to the threat" apply to cancelling but not to this? I guess you can look at bodyguards as both preventative and retaliatory (they'll kill attackers), but armoured vehicles seem purely preventative.

EDIT: One possible difference from purely strategic threats is that the people threatening to cancel you (get you fired, ruin your reputation, etc., which you don't have much control over) might actually value both making and following through on their threats as good things in themselves, rather than seeing following through as a necessary but unfortunate cost that makes their future threats more persuasive. What do they want more: to cancel problematic people (to serve justice and/or signal virtue), or for there to be fewer problematic people? If the former, they may just be looking for appropriate targets to cancel and excuses to cancel them, so you'd mark yourself as a target by appearing problematic to them. I'm not sure this is that different from protecting the president, though, since some people also just value causing harm to the president and the country.
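The expected-cost comparison being debated in the comments above can be made concrete with a toy one-shot model. All the numbers and the function below are invented assumptions purely to illustrate the structure of the argument, not anything from the thread:

```python
def expected_cost_if_held(threat_prob, threat_cost, cave_cost):
    # If an explicit threat arrives (with probability threat_prob), the group
    # pays the cost of being threatened at all (attention, negative
    # perceptions) plus, if it then caves, the cost of a visible late
    # cancellation. If no threat arrives, the event proceeds at no cost.
    return threat_prob * (threat_cost + cave_cost)

# Hypothetical numbers purely for illustration:
cost_cancel_early = 1.0  # quiet loss of the event, before any threat is made
cost_hold_and_cave = expected_cost_if_held(
    threat_prob=0.5, threat_cost=2.0, cave_cost=1.5)

print(cost_cancel_early, cost_hold_and_cave)
```

Under these made-up numbers cancelling early looks cheaper, but, as the objection above notes, the one-shot view hides the repeated-game effect: being seen to submit plausibly raises `threat_prob` in every future round, which is exactly the disagreement between the two commenters.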

[Epistemic status: I find the comments here to be one-sided, so I’m mostly filling in some of the missing counterarguments. But I feel strong cognitive dissonance over this topic.]

I’m worried about these developments because of the social filtering and dividing effect that controversy-seeking speakers have and because of the opposition to EA that they can create.

Clarification 1: Note that the Munich group was not worried that their particular talk might harm gender equality, but that this idea of Hanson's might have that effect if it became much more popular, and that they didn't want to contribute to that. My worries are in a similar vein. The most likely effect of any individual endorsement, invitation, or talk will probably be small, but I think the expected effect is much more worrying, driven by accumulation and tail risks.

Clarification 2: I'm not concerned with truth-seeking but with controversy-seeking (edit: a nice step-by-step guide). In some cases it's hard to tell whether someone has a lot of heterodox ideas and lacks a bit in eloquence, and so often ruffles feathers, or whether the person has all those heterodox ideas but is part...

I think I have a different view than Larks on the purpose of local group events. They're not primarily about exploring the outer edges of knowledge, breaking new intellectual ground, discovering cause X, etc.

They're primarily about attracting people to effective altruism. They're about recruitment, persuasion, raising awareness and interest, starting people down the funnel, deepening engagement, etc.

So it's good not to have a speaker at your event who is going to repel the people you want to attract.

As somebody currently involved in a university group, I am extremely sympathetic towards the EA Munich group, even though they might have made a mistake here. There is a huge amount of pressure to avoid controversial topics/speakers, and it seems like they did not have a lot of time to make a decision in light of new evidence. I have hosted Peter Singer for multiple events (and am glad to have done so), but it has led to multiple uncomfortable confrontations that the average student group (e.g., knitting society) just does not have to deal with.

This highlights why Larks' post is so important. When groups face decisions about when to carry out or cancel an event, having an explicit framework for this decision making would be incredibly helpful. I'm very glad to see Julia Wise/CEA engage with this post, as I think it would be helpful for both CEA and local groups to decide at the beginning of term/before inviting speakers what qualifies people to be speakers.

The main (in my opinion, reasonable) principles elucidated in this post as I read it are:

1. Openness to unusual ideas is one of the guiding principles of Effective Altruism; groups should uphold and promote this.

2. Fu...

Linch: For this and also Robert Wiblin's comment [https://forum.effectivealtruism.org/posts/zYNDJxDm4tWgw8AWn/avoiding-munich-s-mistakes-advice-for-cea-and-local-groups?commentId=toeLLRGbx2nPmBLsC], I'm interested in whether unrepentant opponents of scientific replication [https://statmodeling.stat.columbia.edu/2016/09/21/what-has-happened-down-here-is-the-winds-have-changed/] should be considered beyond the pale in EA circles. It's not a central problem in most people's minds, but a) it's uncontroversially bad in our circles, and b) EAs have a stronger case than other groups for considering denial of truth very bad. This is arguably not a hypothetical example (note that I do not have an opinion on the original research). EDIT: Removed concrete examples since they might be a distraction.

I would actually be really interested in talking to someone like Baumeister at an event, or ideally someone a bit more careful. I do think I would be somewhat unhappy to see them given just a talk with Q&A, with no natural place to provide pushback and followup discussion, but if someone were to organize an event with Baumeister debating some EA with opinions on scientific methodology, I would love to attend that.

Linch: I think that's roughly my position as well.
Abby Hoskin: Same. Especially agree that the format of the event needs to be structured so that ideas are not presented as facts, but are instead open to (lots of public) criticism.

I think this post could have benefited from explaining the word "deplatforming" (as used in the sentence "Recently, EA Munich decided to deplatform Robin Hanson"), as recommended in "3 suggestions about jargon in EA".

As one of the organisers of EA Munich, I would find it helpful to know more clearly what is meant by this, as I could read it as saying we tried to "shut down" a speaker. It could also just be a synonym for "disinvite". I think that especially when criticizing members of the community we should be as precise as possible.

Larks was kind enough to share this article with us before posting, and I pointed out this objection, as my personal opinion, in my reply to him.

This is a really challenging situation - I could honestly see myself leaning either way on this kind of scenario. I used to lean a lot more towards saying whatever I thought was true and ignoring the consequences, but lately I've been thinking that it's important to pick your battles.

I think the key sentence is this one - "On many subjects EAs rightfully attempt to adopt a nuanced opinion, carefully and neutrally comparing the pros and cons, and only in the conclusion adopting a tentative, highly hedged, extremely provisional stance. Alas, this is not such a subject."

What seems more important to me is not necessarily these kinds of edge cases, but that we talk openly about the threat potentially posed. Replacing the talk with a discussion about cancel culture instead seems like it could have been a brilliant Jiu Jitsu move. I'm actually much more worried about what's been going on with ACE than anything else.

By the title, I thought this was going to be a discussion of the dangers of appeasing genocidal dictators (e.g. https://www.ynetnews.com/articles/0,7340,L-3476200,00.html) ... clearly I was wrong!

(FWIW, I had a similar reaction. Like, it was quite clear to me what the actual topic of the post was going to be, but I was wondering whether the author was making a deliberate reference to highlight how bad they think the issue is. I was also wondering if the author was trying to sort of lead by example since comparisons to Nazi-related issues are very taboo in mainstream German discourse. Overall I figured that it's probably unintentional.)

[minor, petty, focussing directly on the proposed subject point]

In this discussion, many people have described the subject of the talk as "tort law reform". This risks sounding technocratic or minor.

The actual subject (see the video) is a libertarian proposal to replace the entirety of the criminal law system with a private, corporate system with far fewer limits on torture and constitutional rights. While neglected, this proposal is unimportant (and worse, actively harmful) and completely intractable.

The 17 people who were interested in attending didn't miss out on hearing about the next great cause X.

This is quite a long article, so forgive me if I've missed it, but it seems like you're arguing that someone's general character - for example, whether they have a history of embezzling money or using racial slurs - shouldn't affect whether or not we invite them to speak at EA events. Whether or not we invite them should depend only on the quality of their ideas, not their reputation or past harmful actions. Is that what you're saying?

I cannot find any section of this article that sounds like this hypothesis, so I am pretty confident the answer is no: that is not what the article says. The article responds relatively directly to this:

Of course, being a prolific producer of premium prioritisation posts doesn’t mean we should give someone a free pass for behaving immorally. For all that EAs are consequentialists, I don’t think we should ignore wrongdoing ‘for the greater good’. We can, I hope, defend the good without giving carte blanche to the bad, even when both exist within the same person. 

Thanks Oli. So I guess this article is arguing that EA Munich was either mistaken about Robin Hanson's character or they were prioritizing reputation over character?

I find this discussion very uncomfortable because I really don't like publicly saying "I have concerns about the impact an individual has on this community" - I prefer that individual groups like EA Munich make the decision on their own and as discreetly as possible - but it seems the only way they could defend themselves is to publicly state everything they dislike about Robin Hanson. I know they've said a couple things already but I don't love that we're encouraging a continued public prosecution and defense of Robin Hanson's character.

I read this piece as proposing a stance towards a social dynamic ("how EA should orient to cancel culture"), rather than continuing litigation of anyone's character.

Judgments about someone's character are, unfortunately, extremely tribal. Different political tribes have wildly different standards for what counts as good character and what counts as mere eccentricity. In many cases one tribe's virtue is another tribe's vice.

In light of this, I think we should view with suspicion the argument that it's OK to cancel someone because they have bad character. Yes, some people really do have bad character. But cancel culture often targets people who have excellent character (this is something we all can agree on, because cancel culture isn't unique to any one tribe; for examples of people with excellent character getting cancelled, just look at what the other tribe is doing!) so we should keep this sort of rationale-for-cancellation on a tight leash.

Here is a related argument someone might make, which I bring up as an analogy to illustrate my point:

Argument: Some ideas are true, others are false. The false ideas often lead to lots of harm, and spreading false ideas therefore often leads to lots of harm. Thus, when we consider whether to invite people to events, we shouldn't invite people insofar as we think they might sp...

EDIT: I plausibly misunderstood kokotajlod, see his reply.

I think there's a dangerous rhetorical slip when we construe "do not invite someone to [speak at] events" as "cancel culture."

Judgments about someone's character are, unfortunately, extremely tribal. Different political tribes have wildly different standards for what counts as good character and what counts as mere eccentricity. In many cases one tribe's virtue is another tribe's vice.
In light of this, I think we should view with suspicion the argument that it's OK to cancel someone because they have bad character.

I think this is one of those things that sounds really good in the abstract, but in practice is not a practical way to think about how to do local group organizing. If I think about the people I was part of decisions to ban/softban from our meetups, I definitely don't think "in many cases one tribe's virtue is another tribe's vice" feels like a particularly appealing abstract argument relative to the more concrete felt sense of "this person negatively impacts the experience of others at the meetup much more than they plausibly derive value...

I'm not construing "do not invite someone to speak at events" as cancel culture.

This was an invite-then-caving-to-pressure-to-disinvite. And it's not just any old pressure, it's a particular sort of political tribal pressure. It's one faction in the culture war trying to have its way with us. Caving in to specifically this sort of pressure is what I think of as adopting cancel culture.

Got it, I must have misunderstood you! I think it's a little difficult for me to understand how much people were talking about the general principles vs the specific example in Munich, and/or how much they believe the Munich example generalizes.

I think this discussion can benefit from more rigor, though it's unclear how to advance it in practice.

Yeah, I wasn't super clear, sorry. I think I basically agree with you that communities can and should have higher standards than society at large, and that communities can and should be allowed to set their own standards to some extent. And in particular I think that insofar as we think someone has bad character, that's a decently good reason not to invite them to things.

It's just that I don't think that's the most accurate description of what happened at Munich, or what's happening with cancel culture more generally: I think it's more like an excuse, rationalization, or cover story for what's really happening, which is that a political tribe is using bullying to get us to conform to their ideology.

As a mildly costly signal of my sincerity here, I'll say this: I personally am not a huge fan of Robin Hanson, and if I was having a birthday party or something and a friend of his was there and wanted to bring him along, I'd probably say no. This is so even though I respect him quite a lot as an intellectual.

I should also flag that I'm still confused about the best way to characterize what's going on. I do think there are p...

This study looked at nine countries and found that polarisation had decreased in five. The US was an outlier, having seen the largest increase in polarisation. That may suggest that American polarisation is due to US-specific factors, rather than universal technological trends.

Here are some studies suggesting the prevalence of technology-driven echo chambers and filter bubbles may be exaggerated.

kokotajlod: Thanks! This is good news; will go look at those studies...
RyanCarey: Interesting that one of the two main hypotheses advanced in that paper is that media is influencing public opinion, but the medium in question is not the internet but TV! (The other hypothesis is "party sorting", wherein people move to parties that align more with their ideology and social identity.) Perhaps campaigning for more money for PBS, or somehow countering Fox and MSNBC, could be really important for US democracy. Also, if TV has been so influential, that suggests that even if online media isn't yet influential at the population scale, it may be influential for smaller groups of people, and that it will be extremely influential in the future.
Stefan_Schubert: Some argue, however, that partisan TV and radio were helped by the abolition of the FCC fairness doctrine in 1987 [https://en.wikipedia.org/wiki/FCC_fairness_doctrine]. That amounts to saying that polarisation was driven at least partly by legal changes rather than by technological innovations. Obviously media influences public opinion. But the question is whether specific media technologies (e.g. social media vs. TV vs. radio vs. newspapers) cause more or less polarisation, fake news, partisanship, filter bubbles, and so on. That's a difficult empirical question, since all those things can no doubt be mediated to some degree through each of these media technologies.
Linch: This seems like an interesting line of reasoning, and I'd maybe be excited to see more strategic thinking around this. It might eventually turn out to be pointless and/or futile, of course.
kokotajlod: I agree! I'd love to see more research into this stuff. In my relevant pre-AGI possibilities doc [https://aiimpacts.org/relevant-pre-agi-possibilities/] I call this "deterioration of collective epistemology." I intend to write a blog post about a related thing (Persuasion Tools) soon.

Thanks for the long writeup. FWIW, I will share some of my own impressions.

Robin's one of the most generative and influential thinkers I know. He has consistently produced fascinating ideas and contributed to a lot of the core debates in EA, like giving now vs. later, AI takeoff, prediction markets, the great filter, and so on. His comments regarding common discussions of inequality are all of a kind with the whole of his 'Elephant in the Brain' work: noticing weird potential hypocrisies in others. I don't know how to easily summarize the level of his intellectual i...

Any discussion of the Munich cancellation as a potential indicator of "norms" should probably note that there are hundreds of talks by interesting thinkers each year at EA conferences/meetups around the world. At least, people I'd consider interesting, even if they don't come into conflict with social norms as regularly as Robin.

On a graph of "controversial x connection to EA," Robin is in the top corner (that is, I can't think of anyone who is both at least as controversial and at least as connected to EA,  other than maybe Peter Singer). So all these other talks may not say much about our "norm" for handling controversial speakers. But based on the organizers I know, I'd be surprised if most other EA groups (especially the bigger/more experienced ones) would have disinvited Robin.

In terms of your own feelings about contributing/collaborating in EA, do you think sentiments like those of the Munich group are common? It seems like their decision was widely criticized by lots of people in EA (even those who, like me, defended their right to make the decision/empathized with their plight while saying it was the wrong move), and supported by very few. If anything, I updated from this incident in the direction of "wow, EA people are even more opposed to 'cancel culture' than I expected."

For all that EAs are consequentialists, I don’t think we should ignore wrongdoing ‘for the greater good’. We can, I hope, defend the good without giving carte blanche to the bad, even when both exist within the same person.

We certainly shouldn't 'ignore' or give 'carte blanche' to the bad in a person, but I don't think that necessarily means we have to cancel them.

I'm not saying that there shouldn't be occasions where we do in fact cancel someone on account of their character, but as someone who...

Minor comment regarding the case of Greg Patton: As someone who heard about the story in early September and was shocked at the fallout, it was heartening to read the aftermath in https://www.lamag.com/citythinkblog/usc-professor-slur/ and https://poetsandquants.com/2020/09/26/usc-marshall-finds-students-were-sincere-but-prof-did-no-wrong-in-racial-flap/ and see that the university eventually “concluded there was no ill intent on Patton’s part and that ‘the use of the Mandarin term had a legitimate pedagogical purpose.’”

I have edited my original post - it was too unfair to Larks and did not convey the message I intended.

In short, I felt that this was a good post, well considered and well written. That said:

  • I didn't find the case made in this post very convincing; I think it defines "cancel culture" as bad rather than proving it so (it could have used a more neutral term, like "boycott"). I also think the German cultural context (for EA Munich) might be very different from the US cultural context (as per the examples), and I was not generally convinced...
[This comment is no longer endorsed by its author]

[EDIT 2020-11-10: I wrote this in response to weeatquince's original comment; it doesn't apply nearly so strongly to the current version.]

It's pretty clearly false that cancel culture is a term used only on the right. I've seen plenty of centre and centre-left people use it – it's a term that resonates with many people. Most of the people I can think of who speak out most frequently against cancel culture are not conservatives. (That's anecdote, but so is your claim that the term is mainly used on the right.)

Of course the people actively engaged in the thing don't like the term, because it suggests that the thing they're doing is bad. But this is a problem encountered in any situation where someone thinks someone else is doing something that is bad. If you forbid even giving the bad thing a name, you quite effectively prevent organised opposition to it.

Whatever "cancel culture" is when you taboo those words, it isn't just boycotting organisations you disagree with – it carries a connotation of actively going after individuals. I agree that the evidence that this is a serious problem mostly takes the (somewhat shaky) form of a collection of examples, but given the nature of the thing I'm not sure how you would go about collecting more systematic evidence. What evidence would convince you that this is actually something to worry about?

Tsunayoshi: Will, you are right that boycotting is not the right term for the phenomenon at hand. In addition to the reason you gave, a cancellation campaign mostly involves pressuring other organizations or people to boycott somebody. Plain old boycotting is one person's decision not to attend a talk; cancelling is demanding that the talk be stopped from even happening. However, I think there is some truth to the point that "cancel culture" is not the most productive term when used in discussions over whether it is actually a bad thing, precisely because, as you say, it suggests that people engaging in it are doing something wrong, and thus begs the question. For a somewhat symmetrical situation, consider proponents of cancel culture starting a discussion over "Should Organization A be a platform for Person B's harmful views?".
willbradshaw: Yeah, I'm sympathetic to this, and I accept the symmetry you suggest. I'm not sure to what extent it applies to this post, though.
weeatquince: Upon reflection, I think that in my initial response to this post I was applying a UK lens to a US author.

I think the culture war dynamics (such as cancel culture) in the USA are not conducive to constructive political dialogue (I agree with Larks on that). Luckily this has not seeped into UK politics very much, at least so far, but it is something I worry about. I see articles in the UK (on the right) making out that cancel culture (etc.) is a problem, often with examples from the States. I expect (although this is not a topic I think much about) that articles of that type are unhelpfully fanning the culture war flames more than quelling them. As such, I had a knee-jerk reaction to this post and put it in the same bucket as such articles. I think I was applying a UK lens to a US author without considering whether it applied.

That said, I still think that Larks is (similarly) unfairly applying a US lens and US examples to a German situation, without making a good case that what he says applies in the German cultural context. As such, I think he may well be being too harsh on EA Munich.

Good point calling out EA Munich's citing of the Slate article. We should have outright rejected their writeup so long as it contained this citation.