
Epistemic status: I mostly want to provide a starting point for discussion, not make any claims with high confidence.

Introduction and summary

It’s 2024. The effective altruism movement no longer exists, or is no longer doing productive work, for reasons our current selves wouldn’t endorse. What happened, and what could we have done about it in 2019?

I’m concerned that I don’t hear this question discussed more often (though CEA briefly speculates on it here). It’s a prudent topic for a movement to be thinking about at any stage of its life cycle, but our small, young, rapidly changing community should be taking it especially seriously—it’s very hard to say right now where we’ll be in five years. I want to spur thinking on this issue by describing four plausible ways the movement could collapse or lose much of its potential for impact. This is not meant to be an exhaustive list of scenarios, nor is it an attempt to predict the future with any sort of confidence—it’s just an exploration of some of the possibilities, and of what could logically lead to what.

  • Sequestration: The EAs closest to leadership become isolated from the rest of the community. They lose a source of outside feedback and a check on their epistemics, putting them at a higher risk of forming an echo chamber. Meanwhile, the rest of the movement largely dissolves.
  • Attrition: Value drift, burnout, and lifestyle changes cause EAs to drift away from the movement one by one, faster than they can be replaced. The impact of EA tapers, though some aspects of it may be preserved.
  • Dilution: The movement becomes flooded with newcomers who don’t understand EA’s core concepts and misapply or politicize the movement’s ideas. Discussion quality degrades and “effective altruism” becomes a meaningless term, making the original ideas impossible to communicate.
  • Distraction: The community becomes engrossed in concerns tangential to impact, loses sight of the object level, and veers off track of its goals. Resources are misdirected and the best talent goes elsewhere.

Below, I explore each scenario in greater detail.

Sequestration

To quote CEA’s three-factor model of community building,

Some people are likely to have a much greater impact than others. We certainly don’t think individuals with more resources matter any more as people, but we do think that helping direct their resources well has a higher expected value in terms of moving towards CEA’s ultimate goals.

However,

good community building is about inclusion, whereas good prioritization is about exclusion

and

It might be difficult in practice for us to be elitist about the value someone provides whilst being egalitarian about the value they have, even if the theoretical distinction is clear.

I don’t want to be seen as arguing for any position in the debate about whether and how much to prioritize those who appear most talented—a sufficiently nuanced writeup of my thoughts would distract from my main point here. However, I do want to highlight a possible risk of too much elitism that I haven’t really seen talked about. The terms “core” and “middle” are commonly used here, but I generally find their use conflates level of involvement or commitment with level of prominence or authority. In this post I’ll be using the following definitions:

  • Group 1 EAs are interested in effective altruism and may give effectively or attend the occasional meetup, but don’t spend much time thinking about EA or consider it a crucial part of their identities and their lives.
  • Group 2 EAs are highly dedicated to the community and its project of making the world a better place; they devour EA content online and/or regularly attend meetups. However, they are not in frequent contact with EA decision-makers.
  • Group 3 EAs are well-known community members, or those who have been identified as potentially high-impact and have prominent EAs or orgs like 80K investing in their development as effective altruists.

A sequestration collapse would occur if EA leadership stopped paying much attention to Groups 1 and 2, or became so tone-deaf about putting Group 3 first that everyone else felt alienated and left the movement. Without direction and support, most of Group 1 and some of Group 2 would likely give up on the idea of doing good effectively. The others might try to go it alone, or even try to found a parallel movement—but without the shared resources, coordination ability, and established networks of the original community, they would be unlikely to recapture all the impact lost in the split. Meanwhile, Group 3 would be left with little to no recruitment ability, since most Group 3 EAs pass through Groups 1 and 2 first.

Finally, Group 2 and especially Group 1 act as a bridge between Group 3 and the rest of the world, and the more grounded, less radical perspective they bring may help prevent groupthink, group polarization, and similar dynamics. Without it, Group 3 would be left dangerously isolated and more prone to epistemic errors. Overall, losing Groups 1 and 2 would curtail EA’s available resources and threaten the efficiency with which we used them—possibly forever.

Again, prioritizing promising members means threading a very fine needle. However, EA leadership should put a great deal of thought and effort into welcoming, inclusive communication and try hard to avoid implying that certain people aren’t valuable. They should also keep an eye on the status and health of the community: if decision-makers get out of touch with the perspectives, circumstances, and problems of the majority of EAs, their best efforts at inclusivity are unlikely to succeed. Prominent EAs should strive to be accessible to members of Groups 1 and 2 and to hear out non-experts’ thoughts on important issues, especially ones concerning community health. Local group organizers should create newcomer-friendly, nonjudgmental spaces and respond to uninformed opinions with patience and respect. We can all work to uphold a culture of basic friendliness and openness to feedback.

Attrition

Over time, some EAs will inevitably lose their sense of moral urgency or stop feeling personally compelled to act against suffering. Some will overwork themselves, experience burnout, and retreat from the community. Some will find that as they grow older and move into new life stages, an altruism-focused lifestyle is no longer practical or sustainable. Some will decide they disagree with the movement’s ideals or the direction it seems to be moving in, or be drawn by certain factors but repelled by others and find that over time their aversion wins out. Each person’s path to leaving the movement will be unique and highly personal. But these one-offs will pose a serious danger to the movement if they accumulate faster than we can bring new people in.

In an attrition collapse scenario, the movement’s impact would taper slowly as people dropped out one by one. EA’s ideas might still influence ex-members’ thinking over their lifetimes, and some people might continue to donate substantially to high-impact charities without necessarily following the latest research or making the community a part of their lives. A shrinking core of highly active EAs would continue to pursue effective altruist goals as people bled away around them, possibly keeping some of the movement’s institutions on life support. If we managed to retain a billionaire or two, we could even continue work like that of the Open Philanthropy Project. But even if we did, our capacity would be greatly reduced and our fundamental ideas and aspirations would die out.

Whether and when to leave the movement is something we should all decide for ourselves, so we shouldn’t fight attrition on the level of individuals. Instead, we should shore up the movement as a whole. EA leadership should keep an eye on the size of the community and be sure to devote enough resources to recruitment to keep EAs off the endangered species list. Local group organizers can maintain welcoming environments for newcomers and make sure they’re creating a warm and supportive community that people enjoy engaging with. We should all work hard to be that community, online and in person.

Dilution

From CEA’s fidelity model:

A common concern about spreading EA ideas is that the ideas will get "diluted" over time and will come to represent something much weaker than they do currently. For example, right now when we talk about which cause areas are high impact, we mean that the area has strong arguments or evidence to support it, has a large scope, is relatively neglected, and is potentially solvable.

Over time we might imagine that the idea of a high impact cause comes to mean that the area has some evidence behind it and has some plausible interventions that one could perform. Thus, in the future, adherence to EA ideas might imply relatively little difference from the status quo.

I'm uncertain about whether this is a serious worry. Yet, if it is, spreading messages about EA with low fidelity would significantly exacerbate the problem. As the depth and breadth of ideas gets stripped away, we should expect the ideas around EA to weaken over time which would eventually cause them to assume a form that is closer to the mainstream.

In a dilution scenario, the movement becomes flooded with newcomers who don’t understand EA’s core concepts and misapply or politicize the movement’s ideas. Discussion quality degrades and so many things fall under the banner of “effective altruism” that it becomes meaningless to talk about. “It’s effective!” starts to look like “It’s healthy!” or “It’s environmentally friendly!”: often poorly thought out or misleading. It becomes much harder to distinguish the signal from the noise. CEA uses the possibility of this scenario as an argument against “low-fidelity” outreach strategies like mass media.

I think it’s possible that EA becoming more mainstream would result in a two-way transfer of ideas. Depending on the scale and specifics of this process, the benefits of slightly improving decision-making in large swaths of society may completely swamp the effects from damage to the original movement. This seems plausible, though not necessarily probable, for global poverty reduction and animal welfare. It seems very unlikely for x-risk reduction, which may succeed or fail based on the quality of ideas of a relatively small number of people.

Could we just shrug and sneak away from the confusion to quietly pursue our original goals? Probably not. People we needed to communicate with would often misunderstand us, interpreting what we said through the lens of mainstream not-quite-EA. It would also be difficult to separate our new brand from the polluted old one, meaning the problem would likely follow us wherever we went.

Assuming we decide a dilution scenario is bad, what can we do to avoid it? As CEA emphasizes, we should make sure to communicate about the movement in high-fidelity ways. That means taking care with how we communicate about EA and avoiding the temptation to misrepresent the movement to make it easier to explain. Experienced EAs should try to be available and approachable for newcomers to correct misconceptions and explain ideas in greater depth. Outreach should focus on long-form and high-bandwidth communication like one-on-ones, and we should grow the movement carefully and intentionally to give each newcomer the chance to absorb EA ideas correctly before they go off and spread them to others.

Distraction

In this collapse scenario, EA remains an active, thriving community, but fails to direct its efforts toward actually producing impact. We’ve followed our instrumental objectives on tangents away from our terminal ones, until we’ve forgotten what we came here to do in the first place.

I’m not talking about the risk that we’ll get caught up in a promising-looking but ultimately useless project, as long as it’s a legitimate attempt to do the most good. Avoiding that is just a question of doing our jobs well. Instead, I’m pointing at something sort of like unintentionally Goodharting EA: optimizing not for the actual goal of impact, but for everything else that has built up around it—the community, the lifestyle, the vaguely related interests. Compare meta traps #2 and #4.

Here are a few examples of how a distraction collapse could manifest:

  • EA busywork:
    • We get so wrapped up in our theories that we forget to check whether work on them will ever affect reality.
    • We chase topics rather than goals. For example, after someone suggests that a hypothetical technology could be EA-relevant, we spend resources investigating whether it could work without really evaluating its importance if it did.
  • Ossification:
    • We focus so much on our current cause areas that we forget to reevaluate them and keep an eye out for better ones, missing opportunities to do the most good.
    • We do things because they’re the kinds of things EAs do without having actual routes to value in mind, and run projects that don’t have mechanisms to affect the things we say we want to change.
  • Fun shiny things:
    • Gossip about community dynamics and philosophical debate irrelevant to our decisions crowd out discussion of things like study results and crucial considerations.
    • We let the hard work of having an impact slide in favor of the social and cultural aspects of the movement, while still feeling virtuous for doing EA activities.

Distraction is, of course, a matter of degree: we’re almost certainly wasting effort on all sorts of pointless things right now. A collapse scenario would occur only if useless activities crowded out useful ones so much that we lost our potential to be a serious force driving the world toward better outcomes.

In this possible future, impact would taper gradually and subtly as more and more person-hours and funding streams were diverted to useless work. Some people would recognize the dynamic and take their talent elsewhere, worsening the problem through evaporative cooling. The version of EA that remained would still accomplish some good: I don’t think we’d completely abandon bed nets in this scenario. But the important work would be happening elsewhere, or else not happening at all.

A distraction scenario is hard to recognize and avoid. Work several steps removed from the problem is often necessary and valuable, but it can be hard to tell the useless and the useful apart: you can make up plausible indirect impact mechanisms for anything. We may want to spend more time explicitly mapping out our altruistic projects’ routes to impact. It’s probably a good habit of mind to constantly ask ourselves about the ultimate purpose of our current EA-motivated task: does it bottom out in impact, or does it not?

Conclusion

EA is carrying precious cargo: a unique, bold, and rigorous set of ideas for improving the world. I want us to pass this delicate inheritance to our future selves, our children, and their children, so they can iterate and improve on it and create the world we dream of. And I want to save a whole lot of kids from malaria as we sail along.

If the ship sinks, its cargo is lost. Social movements sail through murky waters: strategic uncertainties, scandals and infighting, and a changing zeitgeist, with unknown unknowns looming in the deep. I want EA to be robust against those challenges.

Part of this is simply movement best practices: thinking decisions through carefully, being kind to each other, creating a healthy intellectual climate. It’s also crucial to consider collapse scenarios in advance, so we can safely steer away from them.

Having done this, I have one major recommendation: beware ideological isolation. This is a risk factor for both the sequestration and distraction scenarios, as well as a barrier to good truthseeking in general. Though the community tends to appreciate the value of criticism, we still seem very much at risk of becoming an echo chamber—and to some degree certainly are one already. We tend to attract people with similar backgrounds and thinking styles, limiting the diversity of perspectives in discussions. Our ideas are complex and counterintuitive enough that anyone who takes the time to understand them probably thinks we’re onto something, meaning much of the outside criticism we receive is uninformed and shallow. It’s vital that we pursue our ideas in all the unconventional directions they take us, but at each step the movement becomes more niche and inferential distance grows.

I don’t know what to do about this problem, but I don’t think being passively open to criticism is enough to keep us safe: if we want high-quality analysis from alternate viewpoints, we have to actively seek it out.

Thanks to Vaidehi Agarwalla for the conversation that inspired this post, and to Vaidehi, Taymon Beal, Sammy Fries, lexande, Joy O’Halloran, and Peter Park for providing feedback. All of you are wonderful and amazing people and I appreciate it.

If you’d like to suggest additions to this list, please seriously consider whether talking about your collapse scenario will make it more likely to happen.

Comments

This is a question I consider crucial in evaluating the work of organizations, so it's sort of embarrassing I've never really tried to apply it to the community as a whole. Thanks for bringing that to light.

I think one thing uniting all your collapse scenarios is that they're gradual. I wonder how much damage could be done to EA by a relatively sudden catastrophe, or perhaps a short-ish series of catastrophes. A collapse in community trust could be a big deal: say there was a fraud or embezzlement scandal at CEA, OPP, or GiveWell. I'm not sure that would be catastrophic by itself, but perhaps if several of the organizations were damaged at once it would make people skeptical about the wisdom of reforming around any new centre, which would make it much harder to co-ordinate.

Another thing that I see as a potential risk is high-level institutions having a pattern of low-key misbehaviour that people start to see (wrongly, I hope) as an inevitable consequence of the underlying ideas. Suppose the popular perception starts to be "thinking about effectiveness in charity is all well and good, but it inevitably leads down a road of voluntary extinction / techno-utopianism / eugenics / something else low-status or bad". Depending on how bad the thing is, smart thoughtful people might start self-selecting out of the movement, and the remainder might mismanage perceptions of them even worse.

Hi Ben!

I’m not entirely sure what you mean by “high-level institutions having a pattern of low-key misbehavior”—are you talking about things like dishonesty and poor treatment of community members, or about endorsing ideas that violate most ethical frameworks?

The list of scenarios I originally brainstormed for this post did include some sudden catastrophes, but I ultimately decided to focus on these four. Here are the sudden ones anyway, in short form:

Economic recession: see this post. I think a recession could pose a real risk to the movement’s survival, since people who would have to drastically cut donations for multiple years might undergo value drift and be unlikely to re-prioritize giving in the future. Unfortunately, I’m definitely not qualified to give advice on economics or personal finance.

Scandal: I think EA is multipolar enough that a scandal involving a single org or prominent individual wouldn’t be enough to dissolve the movement, though it could definitely reduce trust, kill off that org, and cause some people to leave the community. We’ve certainly had, and survived, controversies in the past. However, I hadn’t thought about a scandal involving multiple orgs—that sounds much more dangerous, and it might be possible given how much they share information and personnel. I suppose the way to avoid this is for orgs to be as transparent as possible, which fortunately at least some of them are pretty good at.

Viral bad publicity: Of course, only a tiny percentage of content goes viral, so I think it’s very unlikely that a malicious hit piece or an incident with bad optics would become well-known enough to really hurt us. However, it’s much more possible that this could happen in a narrow domain such as a sector of academia. I’m pretty horrified at the thought of what would happen to us if it ever became embarrassing and career-thwarting to admit to being an EA.

Finally, the movement could collapse suddenly because its members all get paperclipped, but I figured I didn’t need to remind anyone of that :(

Nice post!

Re: sequestration, OpenPhil has written about the difficulty of getting honest, critical feedback as a grantmaker. This seems like something all grantmakers should keep in mind. The danger seems especially high for organizations like OpenPhil or CEA, which make grants all over the EA movement (e.g. through EA Grants and EA Funds). Unfortunately, some reports from ex-employees of CEA on Glassdoor give me the impression CEA is not as proactive in its self-skepticism as OpenPhil:

Not terribly open to honest self-assessment, but no more so than the average charity.

...

As another reviewer mentioned, ironically hostile to honest self-assessment, let alone internal concerns about effectiveness - I saw and heard of some people who'd got significant grief for this. Groupthink and back-patting was more rewarded.

I've also heard an additional anecdote about CEA, independent of Glassdoor, which is compatible with this impression.

The question of whether and how much to prioritize those who appear most talented is tricky. I get the impression there has been a gradual but substantial update away from mass outreach over the past few years (though some answers in Will's AMA make me wonder if he and maybe others are trying to push back against what they see as excessive "hero worship" etc.). Anyway, some thoughts on this:

  • I think it's not always obvious how much of the work attributed to one famous person should really be credited to a much larger team. For example, one friend of mine cited the massive amount of money Bill Gates made as evidence that impact is highly disproportionate. However, I would guess in many cases, successful entrepreneurs at the $100M+ scale are distinguished by their ability to identify & attract great people to work for their company. I think maybe there is some quirk of our society where we want to credit just a few individuals with an impressive accomplishment even when the "correct" assignment of credit doesn't actually follow a power law distribution. [For a concrete example where we have data available, I think claims about Wikipedia editor contributions following a power law distribution have been refuted.]

  • Even in cases where individual impact will be power law distributed, that doesn't mean we can reliably identify the people at the top of the distribution in advance. For example, this paper apparently found that work sample tests only correlated with job performance at around 0.26-0.33! (Not sure what "attenuation" means in this context.) Anyway, maybe we could do some analysis: If you have an applicant pool with N applicants, and you're going to hire the top K applicants based on a work sample test which correlates with job performance at 0.3, what does K need to be for you to have a 90% chance of hiring the best applicant? (I'd actually argue that the premise of this question is flawed, because the hypothetical 10x applicant is probably going to achieve 10x performance through some creative insights which the work sample test predicts even less well, but I'd still be interested in seeing the results of the analysis; see the simulation sketch just after this list. Actually, speaking of creativity, have any EA organizations experimented with using tests of creative ability in their hiring?)

  • Finally, I think it could be useful to differentiate between "elitism" and "exclusivity". For example, I once did some napkin math suggesting that less than 0.01% of the people who watch Peter Singer's TED talk later become EAs. So arguably, this is actually a pretty strong signal of dedication & willingness to take ideas seriously compared to, say, someone who was persuaded to become an EA through an element of peer pressure after several friends became interested. But the second person is probably going to be better connected within EA. So if the movement becomes more "exclusive", in the sense of using someone's position in the social scene as a proxy for their importance, I suspect we'd be getting it wrong. When I think of the EAs who seem very dedicated to making an impact, people I'm excited about, they're often people who came to EA on their own and in some cases still aren't very well-connected.
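Here's a minimal Monte Carlo sketch of the analysis proposed in the second bullet, assuming true performance and work-sample scores are jointly standard normal with correlation r (the function name and all parameter values are illustrative, not drawn from any real hiring data):

```python
import numpy as np

def required_k(n_applicants, r, q=0.9, trials=20_000, seed=0):
    """Smallest k such that the top-k test scorers include the single
    best performer with probability >= q."""
    rng = np.random.default_rng(seed)
    ranks = np.empty(trials, dtype=int)
    for i in range(trials):
        perf = rng.standard_normal(n_applicants)      # true job performance
        noise = rng.standard_normal(n_applicants)
        test = r * perf + np.sqrt(1 - r**2) * noise   # corr(test, perf) = r
        best = np.argmax(perf)
        # the best performer's rank by test score (1 = top scorer)
        ranks[i] = int(np.sum(test >= test[best]))
    return int(np.ceil(np.quantile(ranks, q)))

print(required_k(n_applicants=100, r=0.3))  # k needed for a 90% chance
```

Intuitively, k should come out large: with r = 0.3 the test explains only about 9% of the variance in performance, so the best performer's test score is only weakly pulled above the rest of the pool.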

I'm glad people want to look for evidence that CEA (and other orgs) is being adequately self-reflective. However, I'd like to give some additional context on Glassdoor. Of the five CEA reviews posted there:

  • Two are from people who have confused CEA with other organizations (neither of those were cited in John's comment)
  • One is fairly recent and positive (also not cited)
  • One is from September 2016, at which point only three of CEA's current staff were employed by the organization (three-and-a-half if you count Owen Cotton-Barratt, who is currently a part-time advisor to CEA).
  • One is from March 2018 -- more recent, but still representing a substantial departure from CEA's current staff list, including a different executive director. A lot can change over the course of 18 months.

I'll refrain from going into too much detail, but my experience is that CEA circa late 2019 is intensely self-reflective; I'm prompted multiple times in the average week to put serious thought into ways we can improve our processes and public communication.

my experience is that CEA circa late 2019 is intensely self-reflective; I'm prompted multiple times in the average week to put serious thought into ways we can improve our processes and public communication.

I imagine that the ex-staff who complain about them being "hostile to honest self-assessment, let alone internal concerns about effectiveness" are likely referring more to something like self-criticism, rather than simply self-reflection. Even an org or individual that was entirely cynically dedicated to maximising their prestige rather than doing good would self-reflect about how to communicate more effectively.

It does seem a bit weird to me for an organization to claim to be self-critical but put relatively little effort into soliciting external critical feedback. Like, CEA has a budget of $5M. To my knowledge, not even 0.01% of that budget ($500) is going into cash prizes for the best arguments that CEA is on the wrong track with any of its activities. This suggests either (a) an absurd level of confidence, on the order of 99.99%, that all the knowledge + ideas CEA needs are in the heads of current employees or (b) a preference for preserving the organization's image over actual effectiveness. Not to rag on CEA specifically--just saying if an organization claims to be self-critical, maybe we should check to see if they're putting their money where their mouth is.

(One possible counterpoint is that EAs are already willing to provide external critical feedback. However, Will recently said he thought EA was suffering too much from deference/information cascades. Prizes for criticism seem like they could be an effective way to counteract that.)

Is putting some non-trivial budget into cash prizes for arguments against what you do the only way to show you're self-critical? Your statement suggests you believe something like that. But that doesn't seem the only way to show you're self-critical. I can't think of any other organisation that has ever done that, so if it is the only way to show you're self-critical, that suggests no organisation (I've heard of) is self-critical, which seems false. I wonder if you're holding CEA to a peculiarly high standard; would you expect MIRI, 80k, the Gates Foundation, Google, etc. to do the same?

I'm suggesting that the revealed preferences of most organizations, including CEA, indicate they aren't actually very self-critical. Hence the "Not to rag on CEA specifically" bit.

I think we're mostly in agreement that CEA isn't less self-critical than the average organization. Even one of the Glassdoor reviewers wrote: "Not terribly open to honest self-assessment, but no more so than the average charity." (emphasis mine) However, aarongertler's reply made it sound like he thought CEA was very self-critical... so I think it's reasonable to ask why less than 0.01% of CEA's cash budget goes to self-criticism, if someone makes that claim.

How meaningful is an organization's commitment to self-criticism, exactly? I think the fraction of their cash budget devoted to self-criticism gives us a rough upper bound.

I agree that the norm I'm implicitly promoting, that organizations should offer cash prizes for the best criticisms of what they're doing, is an unusual one. So to put my money where my mouth is, I'll offer $20 (more than 0.01% of my annual budget!) for the best arguments for why this norm should not be promoted or at least experimented with. Enter by replying to this comment. (Even if you previously appeared to express support for this idea, you're definitely still allowed to enter!) I'll judge the contest at some point between Sept 20 and the end of the month, splitting $20 among some number of entries which I will determine while judging. Please promote this contest wherever you feel is appropriate. I'll set up a reminder for myself to do judging, but I appreciate reminders from others also.

GiveWell used to solicit external feedback a fair bit years ago, but (as I understand it) stopped doing so because it found that it generally wasn't useful. Their blog post External evaluation of our research goes some way to explaining why. I could imagine a lot of their points apply to CEA too.

I think you're coming at this from a point of view of "more feedback is always better", forgetting that making feedback useful can be laborious: figuring out which parts of a piece of feedback are accurate and actionable can be at least as hard as coming up with the feedback in the first place, and while soliciting comments can give you raw material, if your bottleneck is not on raw material but instead on which of multiple competing narratives to trust, you're not necessarily gaining anything by hearing more copies of each.

Certainly you won't gain anything for free, and you may not be able to afford the non-monetary cost.

Upvoted for relevant evidence.

However, I don't think you're representing that blog post accurately. You write that Givewell "stopped [soliciting external feedback] because it found that it generally wasn't useful", but at the top of the blog post, it says Givewell stopped because "The challenges of external evaluation are significant" and "The level of in-depth scrutiny of our work has increased greatly". Later it says "We continue to believe that it is important to ensure that our work is subjected to in-depth scrutiny."

I also don't think we can generalize from Givewell to CEA easily. Compare the number of EAs who carefully read Givewell's reports (not that many?) with the number of EAs who are familiar with various aspects of CEA's work (lots). Since CEA's work is the EA community, we should expect a lot of relevant local knowledge to reside in the EA community--knowledge which CEA could try & gather in a proactive way.

Check out the "Improvements in informal evaluation" section for some of the things Givewell is experimenting with in terms of critical feedback. When I read this section, I get the impression of an organization which is eager to gather critical feedback and experiment with different means for doing so. It doesn't seem like CEA is trying as many things here as Givewell is--despite the fact that I expect external feedback would be more useful for it.

if your bottleneck is not on raw material but instead on which of multiple competing narratives to trust, you're not necessarily gaining anything by hearing more copies of each.

I would say just the opposite. If you're hearing multiple copies of a particular narrative, especially from a range of different individuals, that's evidence you should trust it.

If you're worried about feedback not being actionable, you could tell people that if they offer concrete suggestions, that will increase their chance of winning the prize.

The main barrier to self-improvement isn't knowing your weaknesses; it's fixing them.

  1. I believe CEA is aware of several of its weaknesses. Publicly pointing out weaknesses they're already aware of is a waste of donors' money and critics' time. It's also a needless reputational risk.

  2. If I'm right, and CEA is already aware of its main flaws, then they should focus on finding and implementing solutions. Focusing instead on crowdsourcing more flaws won't help; it will only distract staff from implementing solutions.

These are good points, upvoted. However, I don't think they undermine the fundamental point: even if this is all true, CEA could publish a list of their known weaknesses and what they plan to do to fix them, and offer prizes for either improved understanding of their weaknesses (e.g. issues they weren't aware of), or feedback on their plans to fix them. I would guess they would get their money's worth.

Placing a bounty for writing criticisms casts doubt on whether those criticisms are actually sincere, or whether they're just bs-ing and overstating certain things and omitting other considerations to write the most compelling criticism they can. It's like reading a study written by someone with a conflict of interest – it's very easy to dismiss it out of hand. If CEA were to offer a financial incentive for critiques, then all critiques of CEA become less trustworthy. I think it would be more productive to encourage people to offer the most thoughtful suggestions on how to improve, even if that means scaling up certain things because they were successful, and not criticism per se.

Thanks for the feedback, these are points worth considering.

bs-ing and overstating certain things and omitting other considerations to write the most compelling criticism they can

Hm, my thought was that CEA would be the ones choosing the winners, and presumably CEA's definition of a "compelling" criticism could be based on how insightful or accurate CEA perceives the criticism to be rather than how negative it is.

It's like reading a study written by someone with a conflict of interest – it's very easy to dismiss it out of hand.

An alternative analogy is making sure that someone accused of a crime gets a defense lawyer. We want people who are paid to tell both sides of the story.

In any case, the point is not whether we should overall be pro/con CEA. The point is what CEA should do to improve. People could have conflicts of interest regarding specific changes they'd like to see CEA make, but the contest prize seems a bit orthogonal to those conflicts, and indeed could surface suggestions that are valuable precisely because no one currently has an incentive to make them.

If CEA were to offer a financial incentive for critiques, then all critiques of CEA become less trustworthy.

I don't see how critiques which aren't offered in the context of the contest would be affected.

I think it would be more productive to encourage people to offer the most thoughtful suggestions on how to improve, even if that means scaling up certain things because they were successful, and not criticism per se.

Maybe you're right and this is a better scheme. I guess part of my thinking was that there are social incentives which discourage criticism, and cash could counteract those, and additionally people who are pessimistic about your organization could have some of the most valuable feedback to offer, but because they're pessimistic they will by default focus on other things and might only be motivated by a cash incentive. But I don't know.

There is an established industry committed to providing criticism from outside (well, kind of): external auditors, commonly known as the Big 4. These companies are paid by usually big firms to evaluate their financial statements with regards to accuracy and unlawful activity. While these accountants are supposed to serve the shareholders of the company and the public, they are remunerated and chosen by the companies themselves, which creates an obvious incentive problem. Empirically, this has led to serious doubt about the quality of their work, even after governments had to step in because of poor audits and provide stringent legal requirements for auditors. See: https://www.economist.com/leaders/2018/05/24/reforming-the-big-four

Essentially, a similar problem would arise if CEA would pay external people to provide feedback, which is something GiveWell also ran into (from memory: the page somebody below already linked outlines that finding people who are qualified AND willing to provide free criticism is really hard). If you pay a reviewer beforehand, how do you choose a reviewer? Having such a reviewer might actually be a net negative, if it provides a false sense of security (in probabilistic terms: it would seem from the outside that estimates A and B are independent of each other, but in fact since the first evaluator chooses the second they are not). If you use a format like the current one, where everybody is free to submit criticism, but the organization itself chooses the best arguments, there is no incentive for the organization to pick the most scathing criticisms, when it could just as well pick only moderate ones. (although it is probably better to incorporate moderate criticism rather than none at all)


Even if you solve the incentive problem somehow, there is a danger to public criticism campaigns like that: that they will provide a negative impression of the organization to outside people who do not read about the positive aspects of the organization/movement. There are several reasons to consider this a realistic danger: 1) On the internet, people seem to really love reading negative pieces: they capture our interest and are shared more often. 2) The more negative the opinion expressed, the more salient in memory it is. 3) With EA, it's likely that this might end up being one of the first impressions people have of it.

4) All of this is what happened above with the link to the Glassdoor reviews of CEA: we now have a discussion in this thread about the negative reviews on there, but not really of the positive ones. Previously I had no special information about whether CEA was internally open to self-criticism, but now I only have these negative reviews to go on, and I expect that in a year I will still remember them.

I realize that these points do not necessarily apply to asking for external criticism in itself, just for certain ways to go about it, but I do believe that avoiding the aforementioned problems requires clever and nontrivial design.

Thanks, interesting points!

there is no incentive for the organization to pick the most scathing criticisms, when it could just as well pick only moderate ones.

If a particular criticism gets a lot of upvotes on the forum, but CEA ignores it and doesn't give it a prize, that looks a little suspicious.

Even if you solve the incentive problem somehow, there is a danger to public criticism campaigns like that: that they will provide a negative impression of the organization to outside people that do not read about the positive aspects of the organization/movement.

You could be right. However, I haven't seen anyone get in this kind of trouble for having a "mistakes" page. It seems possible to me that these kinds of measures can proactively defuse the discontent that can lead to real drama if suppressed long enough. Note that the thing that stuck in your head was not any particular criticism of CEA, but rather just the notion that criticism might be being suppressed--I wonder if that is what leads to real drama! But you could have a good point, maybe CEA is too important of an organization to be the first ones to experiment with doing this kind of thing.

Thanks to everyone who entered this contest! I decided to split the prize money evenly between the four entries. Winners, please check your private messages for payment details!

Thanks for raising these points, John! I hadn't considered the "cash prize for criticism" idea before, but it does seem like it's worth more consideration.

I agree that CEA could do better on the front of generating criticisms from outside the organization, as well as making it easier for staff to criticize leadership. This is one of the key things that we have been working to improve since I took up the Interim Executive Director role in early 2019. Back in January/February, we did a big push on this, logging around 100 hours of user interviews in a few weeks, and sending out surveys to dozens of community members for feedback. Since then, we've continued to invest in getting feedback, e.g. staff regularly talk to community members to get feedback on our projects (though I think we could do more); similarly, we reach out to donors and advisors to get feedback on how we could improve our projects; we also have various (including anonymous) mechanisms for staff to raise concerns about management decisions. Together, I think these represent more than 0.1% of CEA's staff time. None of this is to say that this is going as well as we'd like - maybe I'd say one of CEA's "known weaknesses" is that I think we could stand to do more of this.

I agree that more of this could be public and transparent also - e.g. I'm aware that our mistakes page (https://centreforeffectivealtruism.org/our-mistakes) is incomplete. We're currently nearing the end of our search for a new CEO, and one of the things that I think they're likely to want to do is to communicate more with the community, and solicit the community's thoughts on future plans.

my experience is that CEA circa late 2019 is intensely self-reflective; I'm prompted multiple times in the average week to put serious thought into ways we can improve our processes and public communication.

Glad to hear it!

I guess a practical way to measure creativity could be to give candidates a take-home problem which is a description of one of the organization's current challenges :P I suspect take-home problems are in general a better way to measure creativity, because if it's administered in a conversational interview context, I imagine it'd be more of a test of whether someone can be relaxed & creative under pressure.

BTW, another point related to creativity and exclusivity is that outsiders often have a fresh perspective which brings important new ideas.

Not sure what "attenuation" means in this context.

It's probably correction for attenuation: 'Correction for attenuation is a statistical procedure ... to "rid a correlation coefficient from the weakening effect of measurement error".'
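In symbols (a standard statement of the formula, not something taken from the paper itself): if $r_{x'y'}$ is the observed correlation between two imperfectly measured variables, and $r_{xx'}$ and $r_{yy'}$ are the reliabilities of the two measures, the disattenuated estimate of the true correlation is

$$\hat{r}_{xy} = \frac{r_{x'y'}}{\sqrt{r_{xx'}\,r_{yy'}}}$$

Since reliabilities are at most 1, the corrected coefficient is always at least as large as the observed one, which is presumably how the paper's range runs from 0.26 (observed) up to 0.33 (corrected).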

Ah, thanks! So as a practical matter it seems like we probably shouldn't correct for attenuation in this context and lean towards the correlation coefficient being more like 0.26? Honestly that seems a bit implausibly low. Not sure how much stock to put in this paper even if it is a meta-analysis. Maybe better to read it before taking it too seriously.

I'd correct for attenuation, as we care more about getting the people who in fact will perform the best, rather than those who will seem like they are performing the best by our imperfect measurement.

Also selection procedures can gather other information (e.g. academic history, etc.) which should give incremental validity over work samples. I'd guess this should boost correlation, but there are countervailing factors (e.g., range restriction).

Oh interesting, I was thinking it would be bad to correct for measurement error in the work sample (since measurement error is a practical concern when it comes to how predictive it is.) But I guess you're right that it would be reasonable to correct for measurement error in the measure of employee performance.

I'm most concerned about attempts to politicise the movement because, unlike most of the other risks, this one is adversarial. EA has to thread the needle of operating and maintaining our reputation in a politicised environment without letting this distort our way of thinking.

Can you give some examples of attempts to politicise the movement? I can make some guesses as to what you're referring to but I'm not sure.

This is a very cool question I hoped to think about more. Here are the six I came up with (in a draft that I'm unlikely to finish for various reasons), but without further exploration of what they would look like:

1. Collapse. The size and quality of the group of people that identify as community members drop by more than 50%

2. Splintering. Most people identify themselves as '[cause area/faction] first, EA second or not at all'.

3. Plateau/stunted growth. Influence and quality stagnate (i.e. size and quality change by -50% to +100%)

4. Harmless flawed realization. EA becomes influential without really making a decidedly positive impact

5. Harmful flawed realization. EA becomes influential and has a significantly negative impact.

6. 'Extinction'. No one identifies as part of the EA community anymore

I also asked Will MacAskill for "x-risks to EA"; he said:

  1. The brand or culture becomes regarded as toxic, and that severely hampers long-run growth. (Think: New Atheism.)
  2. A PR disaster, esp among some of the leadership. (Think: New Atheism and Elevatorgate).
  3. Fizzle - it just ekes along, but doesn’t grow very much, loses momentum and goes out of fashion.

Anyway, if you want to continue with this, you could pick yours (or a combination of risks with input from the community) and run a poll asking people's probability estimates for each risk.

The Sequestration scenario outlined here is well articulated and struck a chord with me: the EA Hotel’s struggle to gain support (and funding) from those at the centre of the movement* seems like it could be a symptom of it.

Also:

I don’t want to be seen as arguing for any position in the debate about whether and how much to prioritize those who appear most talented—a sufficiently nuanced writeup of my thoughts would distract from my main point here.

I would be interested to read your thoughts on this, and intend to write about it more (and how it fits in with the value proposition of the EA Hotel) myself at some point.

*Note: we have had a good amount of support from those in the periphery.

This is a great post in both content and writing quality. I'm a little sad that despite winning a forum prize, there was relatively little followup. 

Is there some taxonomy somewhere of the ways different social/intellectual movements have collapsed (or fizzled)? Given that information, we'd certainly have to adjust for the fact that EA:

  • Exists in the 21st century specifically, with all the idiosyncrasies of the present time
  • Is kind of a hybrid between a social movement and an intellectual movement: it's based on rather nuanced ideas, is aimed at the highly-educated, and has a definite academic component (compare mainstream conservatism/socialism with postmodernism/neoliberalism)

But still, I'd guess there's potentially a lot of value in looking at the outside view.

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

“What could kill effective altruism?”

This is a tricky question to answer, especially on a public forum about effective altruism, but the author handles a controversial subject with remarkable grace (and, in my view, correctly identifies the scenarios most worth worrying about).

Points I appreciated about this post:

  • It opens with a summary of points to come, which welcomes readers into a long post and allows for easy excerpting and sharing.
  • It doesn’t single out particular people or organizations for blame. Sometimes, that might be necessary for a post, but it also drives people to take sides and endangers the quality of the ensuing conversation. EA belongs to all of us, and I like a framing that presents risks to EA as problems we can all work on, rather than problems that must be solved by a few specific people.
  • It concludes with a reminder of why it is important that effective altruism not die. Posts about flaws or weaknesses in a movement (or almost anything) can sometimes linger as a feeling that it isn’t worth saving; instead, something that stuck with me is the term “precious cargo” to describe our collection of ideas worth preserving.

Anatoly Karlin has argued that wokeism/SJWism has done significant damage to the Effective Altruism movement.
