Eliezer Yudkowsky has an excellent post on "Evaporative Cooling of Group Beliefs".

 https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporative-cooling-of-group-beliefs

 

Essentially, when a group goes through a crisis, those who hold the group's beliefs least strongly leave, and those who hold the group's beliefs most strongly stay. 

This might leave the remaining group less able to identify weaknesses within group beliefs or course-correct, or "steer".
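
To make the mechanism concrete, here's a toy simulation (my own sketch, not anything from Yudkowsky's post): model each member's commitment to the group's beliefs as a number between 0 and 1, and assume a crisis drives out everyone below some hypothetical exit threshold.

```python
import random

random.seed(0)

# Toy model: each member's commitment to the group's beliefs is in [0, 1].
members = [random.random() for _ in range(1000)]

# Hypothetical crisis: members whose commitment falls below a threshold leave.
exit_threshold = 0.4
survivors = [b for b in members if b >= exit_threshold]

mean_before = sum(members) / len(members)
mean_after = sum(survivors) / len(survivors)

print(f"mean commitment before crisis: {mean_before:.2f}")  # ~0.50
print(f"mean commitment after crisis:  {mean_after:.2f}")   # ~0.70
print(f"members remaining: {len(survivors)} / {len(members)}")
```

The survivors' average commitment rises and the spread of views narrows, which is exactly the state in which a group is worst at noticing its own mistakes.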

 

The FTX collapse, and the bad press and bad optics of the Wytham Abbey purchase, probably mean that this is happening in EA right now. 

I'm not really sure what to do about this, but one suggestion might be for community building to move away from the model of trying to produce highly-engaged EAs and towards producing moderately-engaged EAs, who might be better placed to offer helpful criticism and to help steer the movement towards doing the most good. 

Comments

While anecdote is not data, this is my first post on this forum, as someone who does not self-identify as EA (though my friend assures me my views are not exclusive of being EA) and who has many problems with much of EA culture.

But I find the criticism of EA based on SBF incredibly annoying and shallow. Like giving up vegetarianism because of Hitler.

So, I didn't evaporate, for what it's worth!

I'm in favour of the movement continuing its focus on producing highly-engaged EAs, but also iterating on the red-teaming competition.

I guess I'm more skeptical of this becoming a problem than you are. We've seen a lot of almost non-stop criticism on the EA Forum since the FTX collapse, following a huge increase in criticism due to the red-teaming competition, much of which was highly upvoted.

So I don't know, I hardly feel like we have a shortage of criticism at the moment. If anything, I'm starting to worry we're getting too distracted.

All that said, perhaps you feel that I'm focusing too much on the short-term rather than the long?

Arepo

I would say there's been a tonne of criticism, but not a lot of indication that the main players are updating their behaviour based on any of it.

I don’t quite know how I feel about this perspective. On one hand, everyone has ways to improve and so if you aren’t finding them you probably aren’t looking hard enough. On the other hand, just because X number of people say something, it doesn’t mean that they are correct.

What are the changes that you think should be made that have the strongest case?

I've written a bunch of stuff on this recently, so in that sense I'm biased. But my suggestions have generally been:

  • More transparency from core EA orgs (GiveWell seems to set a good benchmark here, and even more so the charities they recommend)
  • More division of responsibility among EA orgs (i.e. more orgs doing fewer things each) - especially (but not exclusively) having separate orgs for EA PR and special ops
  • More carrots and/or more sticks to incentivise staff at EA orgs to perform their jobs adequately
  • Less long-term reliance on heuristic reasoning (e.g. ITN, orthogonality thesis, existential risk, single-author assessments of particular events)

What are the changes that you think should be made that have the strongest case?

  1. The next red-teaming competition should include a forecasting contest: "What is the worst thing to happen to EA in 2023?" First, two winners will be selected for "best entries ex ante". Then, in January, we see if anyone actually predicted the worst hazard that happened.
  2. Give this person a prize.

If it were my choice, I'd likely give it a small prize, but not a large one, as my perspective is that it is only vaguely in the general vicinity of what happened.

We've seen a lot of almost non-stop criticism on the EA Forum since the FTX collapse, following a huge increase in criticism due to the red-teaming competition, much of which was highly upvoted.

So I don't know, I hardly feel like we have a shortage of criticism at the moment.

 

Agreed.

While much of the FTX criticism/discussion is justified, and the red-teaming competition seems like a (mostly) valuable and healthy undertaking, what I find so motivating about EA is the combination of rigorous criticism alongside the examples of hugely positive impact that can be (and already have been!) achieved. If we forget to at least occasionally celebrate the latter, EA will become too miserable for anyone to want to get involved with.

Strong agree. I've been part of other communities/projects that withered away in this way.

Do you have examples/links?

I don't think red-teaming is a good replacement for the kind of diversity of perspectives and approaches that having more moderately involved EAs would bring. Being a highly involved EA takes a lot of time and mental resources, and I would expect moderately involved EAs to be able to provide this kind of diversity simply because they will have more engagement with non-EA things. They will also be less enmeshed with EA as an identity, so will presumably have a more disinterested approach, which I think will be valuable.

I also think they would be less affected by the perverse incentives that are in place for highly engaged EAs when it comes to criticism and new ideas. Currently, both the way weighted voting on the Forum works and a funding environment where (based on the write-ups for EA Funds) having made a good impression on a grantmaker seems to be a non-trivial factor in getting funded disincentivize criticisms that might not go over well. Having more people who are less concerned with their reputation and standing in the EA community would be a good way to counteract this.

I guess my perspective is that even if you're focusing heavily on highly engaged EAs, you tend to automatically create a lot (probably even more) of moderately engaged EAs, so I don't buy the claim that we'll have a shortage of such people to create criticism. This is because creating highly engaged members is hard.

I do think that you have a point, but my (admittedly somewhat limited) engagement with community builders over the past year makes me believe that the goal of highly-engaged EAs creates suboptimal conditions for this to actually happen. This is mostly because community building, particularly on campuses, seems very focused on producing highly involved EAs who are willing to make career choices and major life decisions based on their engagement, which has the unfortunate by-product of not giving less involved EAs a meaningful mechanism to engage with the community. I don't think this necessarily means these people will not have EA principles somewhere in their minds while doing things like voting or making decisions about charitable giving, but I wouldn't expect to see most of them here on the Forum or engaging with people at an EAGx.

I think that there should be some very easy fixes to this in terms of how exactly you do community building, but in the meantime I understand how somebody could look at the EA community and feel they don't really have a place in it, or that they would have to devote an immense amount of time and resources before having a say - I've been involved since 2016 and still feel that way sometimes.

Sorry to hear that you feel that way. What kinds of things do you think are missing?

Sorry, I spaced on this comment and am replying quite late.

In terms of what I think is missing, intellectually, the thing that stands out to me the most is a lack of rights-based perspectives. I think this is a good and necessary countervailing normative framework for utilitarianism. I didn't think this was necessary when EA was mostly about charity evaluation and charitable donations to mostly global health interventions, but I do think it's very necessary now that there is a burgeoning tendency towards policy and even political campaigns, particularly in the context of longtermism. EA has traditionally stayed away from areas around rights advocacy, and for good reason, but I think it's vital if the movement is going to become more involved in policy and politics. I think it's also important to avoid some of the more repugnant logical extensions of longtermism when it comes to tradeoffs between current and future people in intervention design and prioritization. 

I'd like to see more discussion around philanthropy and state vs. non-state provision of need satisfiers, particularly in the developing world. I think this is also increasingly important as EA tends more towards a policy and politics direction and ideas like charter cities and fields like AI governance are cropping up. I think it was fair enough to handwave objections to the unintended effects of "Western" charities when EA was mainly focused on directing charitable donations to where they would be most effective, but at this scale and within this new framework, I think acting more deliberately and thinking seriously about the meta level will be extremely helpful - hopefully with the aid of thinkers and studies from outside EA. I've seen the same few books and SSC or ACX posts mentioned whenever these issues come up, and there is an enormous wealth of higher-quality, more rigorous extant research on the topic that can be helpful, as well as academics who have devoted their careers to studying these things. 

I also think that there's probably a lot missing in terms of perspectives from communities that EA is trying to target, and from people who work in the fields of charitable giving, impact assessment, and EA-adjacent areas who are not EAs. There seems to have been a bit of an uptick in interest in these perspectives lately, but there's probably a need for direct solicitation to make these voices more visible in the community, and I'm not sure how much is actually being done in that regard. 

On a more meta level, a significant reason I feel that way is to do with discussion norms, particularly on the Forum. A lot of discussions are very jargon-heavy (and I'd argue that a lot of the jargon is pointless and borders on affectation, and I say this as someone who is involved in academia, which is ridiculously jargonified, but that's obviously a personal observation), and my impression is that comments can be explicitly or implicitly hostile to posts that aren't presented with an air of impartiality and a certain style of argumentation, with highly voted comments addressing the tone and style over the substance. I understand that every space has its own idiosyncratic tendencies, and I don't intend to argue that EA needs to change this on the Forum or in person, but as someone who has a lot of pressure on my time (as I assume many others do), writing a post in line with some controversial-ish stances that conforms to the Forum style is too high cost and too little reward for me. I really don't think that me not having the time or urge to write up my ideas is a huge loss for the EA community, but I imagine there are some people who have great and innovative ideas who are being lost in this dynamic where the Forum is sometimes treated like an academic journal rather than an online forum. I imagine this effect is particularly strong for people who don't have direct EA involvement but work in EA-adjacent fields, whose insight could be seriously beneficial. 

Again, sorry for the late and unnecessarily long and not very well written reply. 

Thanks, that was interesting!

Yep, I mostly agree that there is a good amount of criticism around at the moment, but this will probably dry up in a few months. I like the idea of iterating on the red-teaming contest.

Thanks for the summary and repost. I do think that this saga also has lessons for the EA community. I have seen many instances where we overemphasise EA alignment over subject-matter expertise, especially when that expertise is practical and mission-critical, for example in operations and risk management. This supports your point that "This might leave the remaining group less able to identify weaknesses within group beliefs or course-correct, or 'steer'."

The framing of trying to "produce highly-engaged EAs" / "produce moderately-engaged EAs" is very off-putting to me. 

Good community-building would get more people who can engage with EA ideas while thinking for themselves and criticising things that seem wrong to them, and this seems like a different axis than "level of engagement". 

Thanks for the post, but I strongly disagree that this is the problem we're going through. Here are some things I think might be relevant that are not accounted for in this diagnosis:
First, there are some strong divides inside the movement (or among people who identify as EAs): longtermists vs. people focused on global poverty, wild animal suffering advocates vs. effective environmentalists, opinions on climate change as a GCR, etc.
Second, I don't think the problem here is just about "optics"... I was imagining that, next time I tell someone (as someone told me 5y ago, thus getting me interested in effective giving) that maybe they want to reconsider donating to their alma mater (because donations to universities are usually not neglected) and instead use an ITN framework to evaluate causes and projects, I might hear a reply like "Oh, you mean the ITN framework consolidated by Owen Cotton-Barratt in 2014... same guy who decided to pay £15m for a manor-not-castle conference centre in 2022." And how can I respond to that? I'm pretty confident that OCB had good reasons, but I cannot provide them; thus the other person may just add "oh, I trust the dean has reasons to act that way as well." End of discussion.
Third, and probably my main issue here: we are beginning to sound a bit neurotic. I'm kinda tired of reading / arguing about EA-the-community. Some months ago, people were commenting on Scott Alexander's remarks about EA's "criticism fetish" - but I think the problem might be deeper: the EA Forum is flooded with self-reference, meta-debates about how the community is or should be. I long for those days when the forum was full of thriving intellectual discussions on cost-benefit analysis, "nuka zaria", population ethics... you'd post a high-quality, well-researched and informative text, and be super glad if it received 20 karma points... Now it's about the community, identity, self-care, etc. I'm not saying these things are not important - but, well, that's not why we're here, right? It's not that I don't appreciate something very well-written like "We must be very clear: fraud in the service of effective altruism is unacceptable", or don't think it deserves a lot of attention... but the very fact that we got to a point where quite obvious things like that have to be said - that we have to say that we are against specific types of felonies - and argued for aloud, and that it gets 17x more attention than, e.g., a relevant discussion on altruism and development by David Nash... I don't know how to conclude this sentence, sorry.
And of course I realize my comment is another instance of this... it reminds me of one of those horrible "relationship arguments" where a couple starts arguing about something and then the very relationship becomes the main topic - and they just can't conclude the discussion in a satisfying way.

bad optics of the Wytham Abbey purchase

It doesn't feel to me that the discussion in that thread is one-sided enough to warrant this phrasing/narrative. 

[anonymous]

The discussion on other social media like Reddit and Twitter seems significantly more negative than the forum discussion. I've seen a few people write off Wytham Abbey criticism as coming from "haters", but in my view the negative comments on the subreddit are mostly coming from people sympathetic to EA who thought EA was about very efficient, evidence-backed, reasonably transparent charity, and are shocked to find out that the "Centre for Effective Altruism" doesn't hold these as fundamental values. 

Thank you for this post, I think it's important. I completely agree with your suggestion to focus on more moderately engaged EAs, but not so much with the impetus for it being "evaporative cooling". 

Essentially, when a group goes through a crisis, those who hold the group's beliefs least strongly leave, and those who hold the group's beliefs most strongly stay. 

I don't think that what will determine leaving or staying in response to recent events will be the degree to which people hold EA beliefs. Based on recent discussions on the Forum, I think those most likely to leave the community as a result of recent events would be people who 

  • care very deeply about transparency and accountability in organizations in general, and charitable or non-governmental organizations in particular, and feel like EVF and other pillar institutions and actors are not upholding these principles in a meaningful way
  • care very deeply about the standard of evidence-based reasoning and are struggling to reconcile things like the purchase of Wytham Abbey with this standard
  • think EAs are espousing principles and ideas that don't match up with their actions
  • are concerned that visible engagement with EA might make meaningful action or effective career choices more difficult due to negative public perception over the long term 
  • don't want to be associated with EA because of negative public perception and don't think it's worth it

I think that out of these, the first two, and to a more limited extent the third and fourth, would be more likely to strongly hold EA beliefs, and the beliefs themselves would be the reason they leave the community. You could argue that the fifth in particular is an example of evaporative cooling, and I would probably agree, but I also think the other groups deserve more attention in this discussion. The tone of less dedicated rats leaving a sinking ship that I'm getting from the evaporative cooling framing seems counterproductive - I absolutely do not think that is what you mean or want to indicate, but I feel like it's latent in the framing. 

I also agree very strongly that focusing on moderately involved EAs is beneficial for non-attrition-related factors, most importantly for engendering intellectual diversity and allowing the movement to learn from and incorporate experiences from outside the EA ecosystem. 
 

Diagnosis sounds right to me.

Also, if you read the comments and other content lately, you might notice that some people are downplaying the gravity of the SBF case, or remarking that it was an utterly unpredictable Black Swan. And some people are signaling virtue by praising the community, or expressing unconditional allegiance to it, instead of to its ideals and principles. I think we both agree this is wasting an opportunity to learn a lesson.
Perhaps you could describe this as a type of evaporative cooling, but it operates in a different way.
My suggestion right now is some sort of forecasting competition about the worst hazard that will come to EA in the next couple of years.

In my view, the correct strategy as a community is to organize, discredit extremists, and try to keep growing.

I'm extremely sceptical that the evaporative cooling model applies. As far as I'm aware its only empirical support is the three anecdotes in the original post. Almost all social science is wrong so I basically don't update at all on the model's predictions. 

"Almost all social science is wrong" is a very strong assertion without evidence to back it up, and I think such over-generalizations are unhelpful.

OK, this is me explaining part of my thought process. 

I'm pretty sceptical of macroeconomic theory. I think we mostly don't understand how inflation works; DSGE models (the forefront of macroeconomic theory) mostly don't have very good predictive power; and we don't really understand how economic growth works, for instance. So even if someone shows me a new macro paper that proposes some new theory and attempts to empirically verify it with both micro and macro data, I'll shrug and think "eh, probably wrong". 

But this is so radically far ahead, epistemically, of something like the evaporative cooling model. We have thousands of datapoints for macro data and tens (?) of millions of micro datapoints; macro models are actively used by commercial and central banks, so they get actual feedback on their predictions, and they're still not very good. Even in microeconomics, where we have really a lot of data and a lot of quasi-random variation, we got something as basic as the effect of the minimum wage on unemployment wrong until we started doing good causal inference work, despite the minimum wage effect being predicted by a model which worked very well in other domains (i.e. supply and demand). 

If, when I read an econ paper, I need high-quality causal inference to believe the theory it's offering me, and even thousands of datapoints aren't enough to properly specify and test a model, it's unclear to me why I should have a lower standard of evidence for other social science research. The evaporative cooling model isn't supported by

  • High-quality causal inference
  • Any regression at all 
  • More than 4 data points
  • In-depth case studies or ethnographies 
  • Regular application by practitioners who get good results using it

If I read a social science paper which didn't have any of these things I'd just ignore it - as it is, I mostly ignore anything that doesn't have some combination of high-quality causal inference or a large literature of low-to-medium-quality causal inference and observational studies reporting similar effects. 

This is a very hard version of this take - in practice, because we have to make decisions in the actual world, I use social science with weaker empirical foundations - but I just have limited trust in that sort of thing. But man, even RCTs sometimes don't replicate or scale. 

Thanks for clarifying further, and some of that rationale does make sense (e.g. it's important to critically look at the assumptions in models, and how data was collected).

I still think your conclusion/dismissal is too strong, particularly given that social science is very broad (much more so than the economics examples given here), some things are inherently harder to model accurately than others, and if experts in a given field have certain approaches, the first question I would ask is "why".

It's better to approach these things with humility and an open mind, particularly given how important the problems are that EA is trying to tackle.

I've just commented on your EA forum post, and there's quite a lot of overlap and further comments seemed more relevant there compared to this post: https://forum.effectivealtruism.org/posts/WYktRSxq4Edw9zsH9/be-less-trusting-of-intuitive-arguments-about-social?commentId=GATZcZbh9kKSQ6QPu