Free link: https://archive.ph/CGjTz (h/t lincolnq)

The Economist recently published this criticism of EA, especially its power structures. Personal thoughts in the comments.

Imagine thinking this is a good outcome of the "keep your mouth shut" strategy CEA recommends regarding media:

Effective altruism is not a cult. As one EA advised his peers in a forum post about how to talk to journalists: “Don’t ever say ‘People sometimes think EA is a cult, but it’s not.’ If you say something like that, the journalist will likely think this is a catchy line and print it in the article. This will give readers the impression that EA is not quite a cult, but perhaps almost.”

Effective altruism treats public engagement as yet another dire risk. Bostrom has written about “information hazards” when talking about instructions for assembling lethal weaponry, but some effective altruists now use such parlance to connote bad press. EAs speak of avoiding “reputational risks” to their movement and of making sure their “optics” are good. In its annual report in 2020, the Centre for Effective Altruism logged all 137 “PR cases” it handled that year: “We learned about 78% of interviews before they took place. The earlier we learn of an interview, the more proactive help we can give on mitigating risks.” It also noted the PR team’s progress in monitoring “risky actors”: not people whose activities might increase the existential risks to humanity, but those who might harm the movement’s standing.

Terrible look, to be honest.

Isn't it somewhat ironic, though, that you're caring what the Economist's journalists think, and implicitly suggesting that the forum post shouldn't have been made because it generated bad PR?

I just find it funny that posting something like that on a public forum will, of course, get it seen by journalists sooner or later anyway.

It's the second bit that concerns me more, because I think it's essentially a correct description of how CEA, and EAs in general (largely because of CEA's influence), view public engagement. Any interaction outside the community is treated mainly as something to be handled through a lens of risk mitigation. The way it's phrased makes it sound like CEA stopped 78% of 137 virus outbreaks.

Like I wrote elsewhere, I think the danger of the "don't talk to media" approach is that you get very few views into the movement, mostly from leadership, and if one of those rare appearances takes a wrong turn, there is no plurality of other views and appearances out there to balance it.

For example, if the only people who "should" give interviews are philosophers in EA leadership who are deeply into longtermism, it will seem as though the entire EA movement is all about longtermism. This is not true.

Many of these concerns resonated with me.

As a relative outsider, I formed my understanding of EA around its online content, which emphasises utilitarianism and longtermism. Whenever I speak to EAs in person, I'm often surprised that these perspectives are more weakly held by community members (and leaders?) than I expected. I think there are messaging issues here. Part of the problem may be that longtermist causes are more interesting to write and talk about. We should be careful to allocate attention to cause areas in proportion to their significance.

Too much of the ecosystem feels dependent on a few grantmakers and re-granters, which concentrates too much power in relatively few people's hands. (At the same time, this seems to be a very hard problem to solve; no particular initiatives come to mind.)

I see EA's preoccupation with reputational risk and optics as a flaw of its overly utilitarian perspective: manipulating the narrative has short-term reputational benefits and hidden long-term costs.

At the same time, I am sceptical of EA's ability to adequately address these issues; such concerns have been raised before without significant change. Many of these problems seem to have arisen from the centralisation of power and the over-weighting of community leaders' opinions, yet the community is simultaneously decentralised enough that coordinating such a change is difficult.

That's interesting; I've had the exact opposite experience. I was attracted to EA for reasons similar to those Zoe and Ben mention in the article, such as global poverty and health, but then found that everyone I was meeting in the EA community was working on longtermist stuff (mostly AI alignment and safety). We have discussed whether, since my club was at a university, most of the students in the club at the time were simply more career-aligned with longtermist work. I don't know how accurate that is, though.

Sadly, I agree with many of the points in this article. 

“Just as the astrologer promises us that ‘struggle is in our future’ and can therefore never be refuted, so too can the longtermist simply claim that there are a staggering number of people in the future, thus rendering any counterargument moot,” he wrote in a post on the Effective Altruism forum. This matters, Chugg told me, because “You’re starting to pull numbers out of hats, and comparing them to saving living kids from malaria.”

I've been thinking this for a long time but haven't been able to put it so succinctly. Personally, I will carry on championing my interpretation of EA, which is to look at charity like your investments and get the best bang for your buck. Whether I'll use the term 'EA' to describe myself will depend on the next few months: if the general understanding of EA becomes speculative longtermism, cultish behavior, and 'ends justify the means', then I'd rather not bring it up.

Maybe EA will split in two: one group carrying on as they are now with a focus on longtermism, and another focusing solely on real impacts that can be seen and measured within our lifetimes. Or maybe it doesn't matter, as long as you and I keep it in mind when we donate to charities funding malaria nets that save real lives today, however small that impact might seem compared to SBF and the future trillions of humans at risk of AI going rogue on Mars.

Edit: Not to say longtermism doesn't have its place; I just feel too much time is spent on things that may never happen while real people face real issues today (or in the near future, like pandemic preparedness).

I thought this was actually a relatively balanced piece, as far as criticisms go. The author is clearly not a fan, but I feel like she resisted the temptation to straw-man far more than most critics do. Good on her (...or good on The Economist, if this is their general style?).

I think this phrasing is unfortunate though:

Cremer and Kemp were told that they and their institutions might lose funding because of it, and were advised not to publish at all.

I imagine most readers will interpret this as a threat from funders, whereas my understanding was that this was a case of other community members looking out for Cremer and Kemp, telling them they were worried that this might happen. From their post:

These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as OpenPhilanthropy. We don't know if these concerns are warranted.

(By the way, the free link has an extra comma at the end which needs removing for the link to work.)

Thanks for clarifying this! I really had interpreted it as a threat from funders.

Other than occasionally seeming to conflate EA with utilitarianism, I thought this was quite a good piece that raises some important pain points in the movement.

Let's aim to become more transparent, less secretive, and more decentralised, and to put whistleblower protections in place.

I found this quote particularly salient: “Having a handful of wealthy donors and their advisers dictate the evolution of an entire field is bad epistemics at best and corruption at worst.”

Thank you for sharing! A one-sentence thought on one of the paragraphs towards the end, about a former EA member...

Initially, they appeared to achieve their goal: MacAskill offered to talk to Cremer. She presented him with structural reforms they could make to the community. Among other things, Cremer wanted whistleblowers to have more protection and for there to be more transparency around funding and decisions about whom to invite to conferences. MacAskill responded that he wanted to support more “critical work”. Subsequently, the movement established a criticism contest. Yet when it came to specifics such as the mechanisms for raising and distributing money, he seemed to think the current process was sufficiently rigorous. MacAskill disputes this characterisation and told me he was in favour of “increasing donor diversity”.

I can understand the pursuit of critical work in service of EA's objectives, but a structure to safeguard that work and the EA brand is vital as well.

Do you know if there is a version that isn't paywalled? 
