
1. Intro

In this post I describe a phenomenon that I think is more common than we give it credit for: “EA disillusionment.” By this, I basically mean a process where someone comes into EA, engages heavily, but ends up feeling negatively toward the movement.

I know at least a handful of people who have experienced this (and I’m sure there are many more I don’t know)—people who I think are incredibly smart, thoughtful, caring, and hard-working, as well as being independent thinkers. In other words, exactly the kind of people EA needs. Typically, they throw themselves into EA, invest years of their life and tons of their energy into the movement, but gradually become disillusioned and then fade away without having the energy or motivation to articulate why.

I think this dynamic is bad for EA in multiple ways, some obvious, some less so. Obviously, people getting disillusioned and leaving is not fun (to put it mildly) for them, and obviously it’s bad for EA if promising people stop contributing to the movement. But I think the most important downside here is actually that it results in a major blindspot for EA: at present, the way people tend to become disillusioned means that they are (a) unusually likely to have the exact kinds of serious, thoughtful critiques that the EA community most wants to hear, but (b) unusually unlikely to offer them. The result is that EA stays blind to major problems that it could otherwise try to improve on.

Why would this be true?

  • (a) The kind of people I mean are unusually likely to have useful, major critiques to offer because they have spent years immersing themselves in the EA world, often changing careers for EA reasons, developing EA social circles, spending time on community building, and so on.
  • (b) But, they’re unusually unlikely to offer these critiques, because by the time they have developed them, they have already spent years pouring time and energy into EA spaces, and have usually ended up despairing of the state of the community’s epistemics, social dynamics, error correction processes, etc. This makes the prospect of pouring even more time into trying to articulate complicated or nuanced thoughts especially unappealing, relative to the alternative of getting some distance and figuring out what they want to be doing post-EA.

I believe a healthier EA movement would be one where more people are able to go through a gentler version of the “disillusionment pipeline” described below, so that they come out the other side with useful perspectives on EA that they are more willing, able, and encouraged to share.

This post aims to do 4 things:

  1. Acknowledge the existence of a significant group of people who engage heavily with EA but end up feeling negatively toward the movement (“disillusioned EAs”), who tend to fade away quietly rather than making their concerns known.
  2. Describe a 3-phase pipeline of EA disillusionment: infatuation, doubt, and distancing.
  3. Point out some of the (in my view, real and important) problems that drive people through these three stages.
  4. Make some suggestions for individuals at various stages of this pipeline, including the possibility that it’s valuable to lean into feelings of doubt/distance/disillusionment, rather than trying to avoid them.

This intro has covered point (1); the rest of the post covers (2)-(4).

A core idea of this post is that going through an extreme version of the first (“infatuation”) phase of the disillusionment pipeline can be very harmful. Infatuation causes people to reorient huge parts of their lives—careers, social circles, worldviews, motivational structures—around EA, making it somewhere between painful and impossible to contemplate the idea that EA might be wrong in important ways.

2. What EA disillusionment looks like

Everyone walks their own path, so I don’t claim the below is a faithful representation of what any one person has gone through. But roughly speaking, the pattern I’m pointing to looks something like the following…

Phase 1: Infatuation

Person discovers EA and is immediately taken by it. Often they’re starting from either feeling a total lack of meaning and purpose, or from feeling overwhelmed by the world’s problems and confused why no one else seems to care about how best to help. Maybe they have already started thinking through some EA-related ideas (e.g. cost effectiveness, opportunity cost), and are thrilled to find others who see the world the same way.

  • “Ahh there's so much terrible stuff in the world what do I do why does no one seem freaked out about this” / "Why do people's brains seem to turn off when thinking about charity, why do all the smart people around me balk at using basic analytic tools to think about how to do good”
  • “Wait wait there's this whole EA movement that's all about doing the most good? There are thousands of these people? There are books and podcasts and Oxford professors and billionaire donors with professional foundations? Thank goodness, I've found my people, I'm home safe”
  • “OK well looks like these people have things figured out, so my job is surely to slot in and find where this community thinks I can contribute most and put my head down and work hard and save the world” / “These people seem more right than any group I’ve ever interacted with, and they seem very sure about XYZ—so probably deferring to that has higher expected value than anything I can come up with on my own”

For people who feel this “click” when they first encounter EA, the feeling can be totally intoxicating. Discovering a community that shares some of your core priorities/values/intellectual assumptions, and that has spent many more person-hours figuring out their implications, naturally leads you to grant that community a lot of goodwill… even if you didn’t check all of their conclusions 100%, and some of them seem a little off. Often, this infatuation phase leads people to put in years of effort, including changing career tracks, leaving old social or intellectual circles, spending time on community building/preaching the gospel, etc.

Newcomers who are perceived as very promising can have an especially overwhelming version of this experience, with community members (even community leaders) telling them they’re super smart and high-potential, offering to pay high salaries and cover travel costs, rapidly bringing them into high-trust spaces, etc.

Phase 2: Doubt comes in

Person settles into EA; after originally being swept off their feet, they start to notice that maybe things don’t add up quite the way they thought.

  • “Wait hang on, I've been putting my head down and following the community consensus… but I’m not actually sure the messages I'm getting make sense”
  • “Hm, when I poked on things I was unsure of, some people gave answers that didn't make sense and some people told me they just think it because other (high status) people think it and some people told me they had no idea and didn't think it was obviously right… so now I’m really confused”
  • “Wait now I'm like months/years in and I feel like most of the wisdom I've absorbed was actually built on shallow foundations and I'm not convinced there's nearly as much consensus or solid thinking as it seems there is”
  • “There were a lot of points where I adopted the community consensus because I thought the people at the top had thought everything through incredibly rigorously, but now I’ve met some of those people and even though they’re smart, I don’t think they’re nearly as reliable as I had been assuming”
  • “…and also the world is HUGE and complicated and we're only accounting for tiny pieces of how it works and also we're all so young and naive”
  • “Aaaaaah”

Different people cope with this phase differently. Some move on relatively quickly to phase 3; others get stuck here, feeling conflicted between growing doubts and impulses to squash those doubts in the name of Doing The Most Good™. At least one friend has described feeling paralyzed/unable to think for themselves about some of the key issues in question, because at this point so much of their career, social circles, and motivational structures are built around believing EA dogma.

Phase 3: Distancing

Person gradually, painfully comes to terms with the fact that EA actually can’t offer them what they hoped. They grapple with the idea that not only have they wasted years of work on ideas they’re no longer sure are right, but that many of their coworkers, friends, and even former heroes/mentors are probably misguided (or even actively causing harm).

  • "EA seems to be more of a social club of people agreeing with each other's takes than a serious intellectual community offering something novel to the world. It pretends to be purely about finding truth and doing good, but of course in reality it has its own status, signaling, hierarchies, and bullshit. I give up on finding people who take thinking about these problems really seriously"
  • "Interacting with new EAs who think EA is the gospel is actively off-putting. I can't stand the echo chambers or false confidence or people blindly agreeing with each other; so many people just seem to be repeating talking points rather than actually thinking anything through"
  • “It sure seems like a lot of effort goes into trying to persuade other people of EA talking points (e.g. “you should work on existential risk and that means either AI or bio”) rather than welcoming newcomers’ disagreements and different intuitions… that feels pretty icky, especially given how shaky some of this stuff seems”
  • “I no longer believe in what I’m doing, but neither do I have conviction that other things I can think of would be better, this is horrible what do I do”

In general, the deeper a person goes in phase 1, and the more of their life they reorient based on that, the more painful phases 2 and 3 will be. A common outcome of phase 3 right now seems to be for the person to leave EA, either explicitly or in a gradual fade-out. This may be the right call for them as an individual, but from EA’s perspective this is a bad outcome, since the community never gets to learn from or improve on what drove them away.

But it does seem possible for this to go differently. To use myself as an example, I would maybe describe myself as “quasi-disillusioned”: when I talk with the “disillusioned” friends who inspired this post, I often agree with them on many substantive points, yet I find myself more inclined than they are to keep engaging with EA in some ways (though with more distance than in the past). I think part of what’s going on here might be that I went through the pipeline above relatively quickly, relatively early. I think that has left me coming out of stage 3 with some emotional and intellectual distance from the community, but still with interest and motivation to keep engaging.

3. Some factors driving EA disillusionment

It’s difficult to nail down exactly what causes people to go through the pipeline described above—in part because different people experience it differently, but mostly because a lot of the relevant factors are more about “vibes” or subtle social pressures than explicit claims EA makes.

Nonetheless, here is my attempt to gesture at some of the factors that I think cause people to become disillusioned, written in the form of takeaways for EA as a movement/community:

  • In the grand scheme of things, EA “knows” so, so little about how to do good. Not nothing! But still, relative to the scale and complexity of the world, extremely little. EA could do a much better job of emphasizing that everything we’re doing is a work in progress, and of encouraging people coming into the community to hold onto their skepticism and uncertainties, rather than feeling implicit pressure to accept EA conclusions (aka EA dogma) wholesale.
  • There are huge differences in how well-grounded different EA claims are; we should be much more mindful of these differences. “Donations to relieve poverty go much further in the developing world than in the developed world” or “If you care about animal welfare, it probably makes more sense to focus on farmed animals than pets because there are so many more of them” are examples of extremely well-grounded claims. “There’s a >5% chance humanity goes extinct this century” or “AI and bio are the biggest existential risks” are claims with very different epistemic status, and should not be treated as similarly solid. “You, personally, should [fill in the blank]” is different again, and is (in my view) a category of claim we should be especially wary of making strongly.[1]
  • EA sells itself as having very few intellectual assumptions or prerequisites,[2] but in practice the ideas associated with the community are built on a very specific set of paradigms and assumptions. We should be both more aware and more cautious of letting EA-approved frames solidify. Many of these frames are useful and valid. But ideas like “your career options are earning to give, community building, or direct work,” or “the two types of work on AI are technical safety and governance,” or “the most important kind of AI safety work is alignment research,” which can come to feel simple and self-evident, actually conceal a ton of conceptual baggage in terms of how to slice things up. If we accept frames like these blindly, rather than questioning what all those words mean and how they were chosen and what they subtly miss, we will do less good.
  • We should be cautious of encouraging people to fall too hard in the “infatuation” phase. Instead of just giving them positive reinforcement (which I think is currently the norm - “It’s so cool you’re getting these ideas so quickly! You’re such a natural EA!”), established community members should see it as part of their role to help people to keep their feet, take their time with major life changes, and realize that they may later feel less infatuated with EA than they do at present. My aspiration for this post is that people could send it to infatuated newcomers to partially serve this function.

An aside on EA being wrong about a bunch of stuff

A common thread in the above is that EA is sure to be making subtle mistakes about a lot of things, and huge mistakes about a few things.

On the one hand, I doubt many EAs would dispute this claim outright—EA does famously love criticism, and in practice I think the community is good at giving people positive feedback for writing up long, detailed, step-by-step arguments about things EA might be getting wrong.

But on the other hand, in many settings the community as a whole seems to give off strong implicit vibes of “we have things figured out; let us tell you how you should think about this; here are the approved talking points.” This especially seems to be directed toward people who are encountering EA for the first time.

A tension that has to be navigated here is that often people’s first objections are not novel, so strong counterarguments already exist. If someone comes to their first EA event and asks why EA doesn’t support giving money to the homeless, or why you would be worried about existential risk from AI when it’s obvious that AI isn’t conscious, then it makes sense to walk them through what they’re missing.

The problem is distinguishing situations like that from situations where their take is a bit off-kilter from the standard EA consensus, or where they’re coming at the topic from a new angle that doesn’t quite fit—which could be a sign of simply not yet understanding an argument, but could also be a sign of having something new to contribute. In cases like that, what often seems to happen is that the newcomer experiences some combination of explicit arguments and implicit social pressure to ignore the subtle discordant notes in their thinking, and instead just accept the ready-made conclusions EA has to offer. This sands the edges off their views, making them less able to think independently and less inclined to express when they do.

(I don’t think this is the place to get into details of specific examples, but in a footnote I’ll list some examples of places in my field—AI policy/governance—where I think EA thinking is much shakier than people treat it as.)[3]

So, one of my aims with this post is to nudge things slightly in a direction that gives people more implicit encouragement to foster doubts, welcome half-formed notes of unease, play around with their own ideas, and take seriously their own ability to notice gaps in mainstream EA thinking. Without this, our loud protestations that we love well-thought-through, cleanly-written-up criticism will be pretty useless.

4. Suggestions for individuals

Again, everyone experiences all this differently, so I don’t pretend to have a complete or perfect list of suggestions. But some things that might be good include the following.

Anticipate and lean into feelings of doubt, distance, and disillusionment

  • Know that other people have gone through the disillusionment pipeline, including (especially!) very smart, dedicated, caring, independent-minded people who felt strong affinity for EA. Including people who you may have seen give talks at EA Global or who have held prestigious jobs at EA orgs. Consider that you, too, may end up feeling some of the things described above.
  • If you notice yourself feeling doubts, lean into them! Sure, sometimes there will be a nicely argued counter-argument that you’ll find persuasive, but often there won’t be, especially if it’s in an area you know particularly well. Remember that the goal is to actually make the world a better place, not to agree with the EA community; EA ≠ inherently impactful. The possibility that if you think things through for yourself you might come up with a different conclusion is something you should search out, not be scared of.
  • If you’re going through something like phase 2 or 3 and finding it emotionally difficult, know that you’re not alone. If you feel like you’re losing something important to your life, give yourself grace to grieve that. Remember that you don’t have to accept or reject EA wholesale. You can stay in touch with people whose ideas or company you value; you can work on an EA cause in a non-EA way, or a non-EA cause in an EA way; you can float around the edges of the community for a while (aka take an EA-break or an EA-slow), or leave entirely if you want.

Maintain and/or build ties outside EA

  • Keep your head above water in the non-EA world. Rather than getting swept away in the EA tide, make sure you’re aware of and taking seriously the expertise, wisdom, intuitions, etc. of people from other communities who do work related to yours; they almost certainly know things and can do things that EA doesn’t and can’t. (This can still be true even if you find EA ideas, conversations, etc. more valuable on the whole.) In my work, I notice a clear difference between people who seem to only read and talk to EA sources and those who are familiar with a broader set of issues and perspectives—I much prefer working with the latter group.
  • Relatedly, be wary of cutting ties to non-EA friends, social spaces, hobbies, or other sources of meaning. If you don’t have many such ties (e.g. because you’ve just moved to a new place), actively foster new ones. This isn’t just so that you have something to catch you if you end up wanting more distance from EA—it’s also an important psychological buffer to have in place to ensure that you feel able to think clearly about EA ideas, social pressures, and so on in the first place. If your life is set up such that your brain subconsciously thinks that criticizing EA could alienate you from your entire support structure, you’re likely to have a much harder time thinking independently.

Defer cautiously, not wholesale

  • Don’t dedicate your whole career to doing what someone else thinks you should do, if you don’t really get—in your head and your gut—why it’s a good idea. Of course some amount of deferring to others is necessary to navigate life, but make sure you have explicit tags in your world model for “on this point, I’m deferring to X because Y, even though I don’t really get it;” and seriously, don’t let that tag apply to the big picture of what you’re doing with your life and work.
  • I know too many people who have gotten multiple years into a career track only to realize that they got there without really grokking what they were doing. The result is usually both that they find it very hard to stay motivated, and that they struggle to figure out how to navigate the countless day-to-day decisions about how to prioritize and what to aim for because they don’t have a deeply rooted internal sense of what they’re doing and why.

Assume EA is making mistakes, and help find them

  • Figuring out gaps, mistakes, subtle wrongnesses, etc. is so helpful! That’s what we most desperately need as a community! And it will often begin with a feeling of something being subtly off, or a sense that the reasoning of someone whose judgment you respect is incomplete. Notice those moments, treasure them, and promote them to conscious attention. If something feels wrong but you can’t immediately articulate a clear argument for why, don’t dismiss it—keep noodling away, talking with friends, and see if over time you can get more clarity on it.[4]
  • Especially if you are coming to EA after having spent much time and intellectual energy elsewhere, there will likely be things you have thought more deeply about than anyone else in the community. If you get deep into some EA issue, the fact that your experiences differ from those of the small number of other people working on it so far very likely means you will see things that others have missed. EA needs your help pointing out the things you think are wrong or subtly off the mark!

5. Conclusion

As one friend who previewed this post put it:

“EAs who go too deep in Phase 1 will find it more professionally/ personally/ psychologically costly to accept Phase 2. I think this is bad for a couple reasons: (i) It creates a community of people who have selective blindspots/biases, i.e., refusing to ask themselves questions that will prompt Phase 2; and (ii) it means that entering Phase 2 can be completely destabilizing, leading them quickly to a ‘fuck EA’ view.

The ‘EA is the one true path’ and ‘fuck EA’ camps are both epistemically suspect. I think we need a higher percentage of people in a ‘EAs have a lot of great ideas, but the community is also misguided in a bunch of ways’ camp.”

I agree with this; one aim of this post is to argue that we should much more actively welcome and foster the existence of the "EAs have a lot of great ideas, but the community is also misguided in a bunch of ways" camp.

Ultimately, I hope that dealing better with the dynamics described in this post could result in an EA community that is more self-aware, less dogmatic, less totalizing, and thereby more able to persist and have a large positive impact into the future.

Thanks to Rebecca Kagan, Michael Page, George Rosenfeld, and a couple of others (who preferred not to be named) for comments on earlier drafts of this post.


  1. This post is a good example of people apparently internalizing a message of “you, personally, should [fill in the blank]”—without necessarily being told it explicitly. (See also that post’s author noting reluctance to include in the original post that she thinks AI might be less important than the community consensus thinks it is—talk about pressure not to question EA dogma, yikes.) ↩︎

  2. I am complicit in describing EA this way in the past; I now see that post as articulating an aspiration, not the reality of how EA works. ↩︎

  3. When I interact with EAs who are interested in AI policy/governance, they often seem very bought into various ideas and frames that seem pretty speculative to me. These include: focusing on solutions based on “international cooperation” (whatever that means); assuming a distinction between AI “capabilities” and AI “safety”; a fixation on “AGI timelines,” rather than a more flexible conception of what kinds of things we might see over what kinds of time periods; an inclination to separate out and disregard “near-term” AI risk concerns, such as those relating to fairness, social justice, and power dynamics. To be clear—many of these ideas are useful starting points, and some of the best critiques or expansions of these ideas have come from people associated with EA. But nonetheless, there seems to be an effect where ideas like this get codified and then passed onto newcomers as if they were the Correct Takes or The Way Things Are, rather than as if they were useful preliminary thinking tools. ↩︎

  4. (For instance, this post is the result of a couple years’ noodling and chatting about some stuff that felt off to me, which I couldn’t put into words very well initially, and still can’t put into words perfectly.) ↩︎

Comments

This post resonated a lot with me. I was actually thinking of the term 'disillusionment' to describe my own life a few days before reading this.

One cautionary tale I'd offer readers: don't automatically assume your disillusionment is because of EA; consider the possibility that it's a personal problem. Helen suggested leaning into feelings of doubt or assuming the movement is making mistakes. That is good advice if EA is the main cause, but potentially harmful if the person gets disillusioned in general.

I'm a case study for this. For the past decade, I've been attracted to demanding circles. First it was social justice groups and their infinitely long list of injustices. Then it was EA and its ongoing moral catastrophes. More recently, it's been academic econ debates and their ever growing standards for what counts as truth.

In each instance, I found ways to become disillusioned and to blame my disillusionment on an external cause. Sometimes it was virtue signaling. Sometimes it was elitism. Sometimes it was the people. Sometimes it was whether truth was knowable. Sometimes it was another thing entirely. All my reasons felt incredibly compelling at the time, and perhaps they all had significant degrees of truth.

But at the end of the day, the common denominator in my disillusionment was me. I felt all the problems in these circles very intensely, but didn't have much appreciation for the benefits. The problems loomed 10x larger in my head than the benefits did. Instead of appreciating all the important things I got to think about, talk about, or work on, I thought about the demands and all the stress they brought. In my case, leaning into the disillusionment would only perpetuate the negative pattern of thinking I have in my head.

Granted, my case is an extreme one. I have a decade of experiences to look back on and numerous groups I felt affinities with. And I've had intense experiences with imposter syndrome and performance anxiety. I can confidently attribute most of these feelings to myself,  reverse some of the advice Helen offered,[1] and lean away  from disillusionment. 

But I suspect I'm not the only one with this problem. EA seems to select for easily disillusioned personality traits (as evidenced by our love of criticism). And I also suspect that these feelings are common for young idealistic people to go through while navigating what it means to improve the world. Not everyone should be leaning into that.

  1. I'm practicing "maintain and/or build ties outside EA". It requires intentional effort on my part since making + maintaining adult friends is always hard. However, it has helped me realize my disillusionment still exists outside EA. I'm partly reversing "anticipate and lean into feelings of doubt..." since I like the anticipating part, but not the leaning part. I'm reversing "assume EA is making mistakes and help find them" since I need to see more of the positive and less of the negative. I don't have any thoughts on "defer cautiously, not wholesale" since this comes naturally to me.

Helen's post also resonated a lot with me. But this comment even more so. Thank you, geoffrey, for reminding me that I want to lean away from disillusionment à la your footnote :-)

(A similar instance of this a few months back: I was describing these kinds of feelings to an EA-adjacent acquaintance in his forties and he said, "That doesn't sound like a problem with EA. That sounds like growing up." And despite being a 30-year-old woman, that comment didn't feel at all patronising, it felt spot on.)

Would be interested to hear what the problems/feelings in your case were :)

Thank you for writing this - a lot of what you say here resonates strongly with me, and captures well my experience of going from very involved in EA back in 2012-14 or so, to much more actively distancing myself from the community for the last few years. I've tried to write about my perspective on this multiple times (I have so many half written Google docs) but never felt quite able to get to the point where I had the energy/clarity to post something and actually engage with EA responses to it. I appreciate this post and expect to point people to it sometimes when trying to explain why I'm not that involved in or positive about EA anymore.

For what it’s worth: I would very much like to read your perspective on this one day.

Might the possibility of winning $20K help you get over the hill?

I’d be happy to comment on drafts, of course.

No pressure, of course—I imagine you’re very busy with your new role. In any case I’ve appreciated the conversations we’ve had about this in the past.

Thanks Peter - I continue to feel unsure whether it's worth the effort for me to do this, and am probably holding myself to an unnecessarily high standard, but it's hard to get past that. At the same time, I also haven't been able to totally give up on the idea of writing something either - I do have a recent draft I've been working on that I'd be happy to share with you.

I thought about the criticism contest, but I think trying to enter that creates the wrong incentives for me. It makes me feel like I need to write a super well-reasoned and evidenced critique, which feels like too high a bar; if I'm going to write anything, something I can frame more as my own subjective experience feels better. Also, if I entered and didn't win a prize I might feel more bitter about EA, which I'd rather avoid - I think if I'm going to write something it needs to be with very low expectations about how EAs are going to respond to it.

I agree very much! I have a lot of half-finished but mostly not-even-really started drafts myself, and one thing that resonated in the OP was the need for spaces where those hunches can be explored, as opposed to expecting thought-through and well-articulated criticisms. 

Perhaps worth mentioning, for people who don’t know Jess: Jess is Head of AI Policy at the Centre For Long-term Resilience, an impressive UK think tank. She also has a good blog.

On individual advice: I'd add something about remembering that you are always in charge and should set your own boundaries. You choose what you want to do with your life, how much of EA you accept, and how much you want it to influence your choices. If you're a professional acrobat and want to give 10% of your income to effective charities, that's a great way to be an EA. If someone points out that you also have a degree in computer science and could go work on AI safety, it's fine to reply "I know but I don't want to do that". You don't need to defend or justify your choices on EA grounds.

(That doesn't mean you might not want to defend some choice you've made. The research side of EA is all about making and breaking down claims about what actions do the most good. But people's personal choices about how to act don't themselves constitute claims about the best way to act.)

EA is a highly intellectual community, so I worry that people feel the need to justify or defend anything they do or any choice they make through an EA lens, and that this might make EA infiltrate their lives more than they are actually comfortable with and lead them to fail to set the right boundaries. People should do EA things because and to the extent that they want to, and the EA community should be there as a resource to help them do that. But EA should justify itself to you, not the other way round.

I don't buy this. Perhaps I don't understand what you mean.

To press, imagine we're at the fabled Shallow Pond. You see a child drowning. You can easily save them at minimal cost. However, you don't. I point out you could easily have done so. You say "I know but I don't want to do that". I wouldn't consider that a satisfactory response.

If you then said "I don't need to justify my choices on effective altruist grounds" I might just look at you blankly for a moment and say "wait, what has 'effective altruism' got to do with it? What about, um, just basic ethics?" Our personal choices often do or could affect other people.

I don't think that people should endlessly self-flagellate about whether they are doing enough. You need to recognise it's a marathon, not a sprint, and there are serious limits to what we can will ourselves to do, even if we think it would be a good idea in principle. And it's important to be as kind, forgiving, and accepting to ourselves as we think we should be to others we love. But what you've said, taken at face value, seems like carte blanche for not trying.

I didn't think Askell's comment was advocating not trying. The examples given - giving 10% and going into AI safety - were pretty demanding.

I guess I'd be much more likely to assume good intentions of someone who says 'yeah, I don't want to work on AI safety', than of someone who says 'yeah I don't want to step into a shallow pond to rescue a child'. In the first example I'd think something like 'ok, this person has considered working in this area, decided against it, and doesn't want to explain their reasoning to me at this point in time'. I think that's fine. It can be quite draining to repeatedly be asked to justify to others why you're not working in the area they judge to be the highest priority.

Agreed! I wrote a post about exactly this. Julia Wise also has a good one on similar topics.

For me, a big change happened when I had been around in EA long enough, done enough things, and spoken to enough people to be able to say, "if I say something disagreeable to somebody and it turns out they are one of those people who will judge me personally for disagreeing with the dominant paradigm on x thing, it's their loss, not mine." I also feel I can say something disagreeable to people and they will tend to hear me out rather than ignore me as a newbie who doesn't know anything (in fairness, when I was just starting, I actually didn't know much at all!).

For the newest people (I've only been in EA for 2 years, so I am still quite new) with few legible achievements and almost no connections, this is more difficult. If you constantly feel you have to think about how to get into x office or get in the good graces of y person or receive z grant, you feel far more pressure to fight against your own doubt for fear that people will judge you for your disagreements. This is unhealthy. 

Obviously there is a continuum: it's not that you can either disagree all the time or never disagree, but there are varying amounts of disagreement that people feel comfortable having.

Sometimes people talk about "f**k you money" to mean money that you can use to ride out unemployment if you decide you don't like your job anymore and want to quit. In EA circles there is an analogous concept, something like "I respectfully disagree with your worldview social capital" or "I respectfully disagree with your worldview concrete achievements that you cannot ignore". The more of that you have, especially the latter, the better. Luckily, the latter is also relatively correlated with how much you've been able to achieve, which is (hopefully) correlated with impact.

Love the analogy of "f**k you money" to "I respectfully disagree with your worldview social capital" or "I respectfully disagree with your worldview concrete achievements that you cannot ignore"!

Know that other people have gone through the disillusionment pipeline, including (especially!) very smart, dedicated, caring, independent-minded people who felt strong affinity for EA. Including people who you may have seen give talks at EA Global or who have held prestigious jobs at EA orgs.

Also, I think even people like this who haven't gone through the disillusionment pipeline are often a lot more uncertain about many (though not all) things than most newcomers would guess. 

I'd say I'm a newcomer to EA. I'd also say I'm in Phase 1. In other words, I'd say I’m an infatuated newcomer reading a post meant to warn newcomers against infatuation.

I agree with Helen's overall argument, and I think it applies to most ideologies, movements, passions, and ideas. Overcommitment leads to burnout and backlash and eventual disillusionment — people shouldn’t become overly infatuated with anything.

Figuring out gaps, mistakes, subtle wrongnesses, etc. is so helpful! That’s what we most desperately need as a community! And it will often begin with a feeling of something being subtly off…

I have a feeling of something being subtly off, and the thing that feels wrong is the tone of this post.

Before I clarify my critique, I want to strongly affirm my agreement with Helen’s thesis and thank her (as well as those she thanks) for writing and editing this post.

This post feels condescending. I feel like a child being instructed on how to digest an ideology that I’m told is far too complicated for little me to explore on my own. I know that wasn’t at all Helen’s purpose; I know she had entirely good intentions. However, the post feels like it was written and edited exclusively by people who are intimately related to the EA movement, and never given to an inexperienced layman to digest.

I’m merely following Helen’s own instructions — I felt something subtly off, and I’m pointing it out. I hope those reading my comment realize that my goal is not to merely critique Helen, but also to inform future writers of future posts to keep in mind the importance of including the voices of the newcomers in posts directed at those newcomers.

-Munn

I'm delighted that you went ahead and shared that the tone felt off to you! Thank you. You're right that I didn't really run this by any newcomers, so that's on me.

(By way of explanation, but not excuse: I mostly wrote the piece while thinking of the main audience as being people who were already partway through the disillusionment pipeline - but then towards the end edited in more stuff that was relevant to newcomers, and didn't adjust who I ran it by to account for that.)

I'm a not-newcomer who's considering how to relay the takeaways from this post to my local EA group. I didn't get the condescending tone that you did, but I'm gonna take that into account now. So thanks!

nonn

I'm somewhat more pessimistic that disillusioned people have useful critiques, at least on average. EA asks people to swallow a hard pill "set X is probably the most important stuff by a lot", where X doesn't include that many things. I think this is correct (i.e. the set will be somewhat small), but it means that a lot of people's talents & interests probably aren't as [relatively] valuable as they previously assumed.

That sucks, and creates some obvious & strong motivated reasons to lean into not-great criticisms of set X. I don't even think this is conscious, just a vague 'feels like this is wrong' when people say [thing I'm not the best at/dislike] is the most important. This is not to say set X doesn't have major problems.

They might more often have useful community critiques imo, e.g. more likely to notice social blind spots that community leaders are oblivious to.

Also, I am concerned about motivated reasoning within the community, but don't really know how to correct for this. I expect the most-upvoted critiques will be the easy-to-understand plausible-sounding ones that assuage the problem above or social feelings, but not the correct ones about our core priorities. See some points here: https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism

What does 'iml' stand for?

typo, imo. (in my opinion)

I would guess both that disillusioned people have low value critiques on average, and that there are enough of them that if we could create an efficient filtering process, there would be gold in there.

Though another part of the problem is that the most valuable people are generally the busiest, and so when they decide they've had enough they just leave and don't put a lot of effort into giving feedback.

nonn

I'd add a much more boring cause of disillusionment: social stuff

It's not all that uncommon for someone to get involved with EA, make a bunch of friends, and then the friends gradually get filtered through who gets accepted to prestigious jobs or does 'more impactful' things in community estimation (often genuinely more impactful!)

Then sometimes they just start hanging out with cooler people they meet at their jobs, or just get genuinely busy with work, while their old EA friends are left on the periphery (+ gender imbalance piles on relationship stuff). This happens in normal society too, but there seem to be more norms/taboos there that blunt the impact.

I really appreciated you writing this up. A couple of thoughts.

First, a useful frame: when people are unhappy about a group, any group, they have two choices. They can either 'exit' or 'voice'. This was originally discussed in relation to consumers buying goods, but it's general and as simple as it sounds: you leave or you complain. Which of those people do is more complicated: it depends, afaict, on how loyal they are to the group and also whether they think voicing their concerns will work. 

It seems like what you're saying is that people come into EA, become loyal to the idea, get disillusioned because the reality doesn't live up to the hype, but then exit rather than voice. The reason they exit is some combination not wanting to harm the project or be seen as a bad actor, and because they don't think their criticisms will be listened to. 

Second, lots of this really resonates with me. EA sort of sells itself as being full of incredibly smart, open-minded, kind, dedicated people. And, for the most part, they are - at least by the standards of the rest of the world. But they are people still: prone to ego, irrationality, distraction, championing their pet projects, sticking to their guns, and the rest. And these people work together in groups we call 'organisations'. And even with the best people, getting them to work together and work differently is a struggle... It is a recipe for disillusionment.

(I recognise I'm not offering any solutions here, sorry...)

The biggest reason I am / have been disillusioned is that ethics are subjective (in my view, though I feel very confident). I don’t understand how a movement like EA can even exist within this paradigm unless 

  1.  The movement only serves as a knowledge keeper of how to apply epistemics to the real world, with a specific focus on making things “better”, better being left undefined, but does not engage in object level/non research work outside of education. Which is almost just LW. 
  2.  The movement splits into a series of submovements which each have an agreed upon ethical framework. Thus every cause area/treatment can be compared under a standardized cost benefit analysis, and legitimate epistemic progress can be made. Trades can be made between the movements to accomplish shared ethical goals. Wars can be waged when consensus is impossible (sad). 

Clearly, neither of the above suggestions are what we are currently doing. It feels low integrity to me. I’m not sure how we have coalesced around a few cause areas, admitted each is justified basically because of total utilitarianism, and then still act like we are ethically agnostic. That seems like a very clear example of mental gymnastics.

Once you get in this mindset, I feel like it immediately becomes clear that EA in fact doesn't have particularly good epistemics. We are constantly doing CBAs (or even worse, just vaguely implying things are good without clear evidence and analysis) with ill-defined goals. Out of this, many problems emerge. We have no institutional system for deciding where our knowledge is at and checking decision-making powers (decentralization is good and bad though). Billionaires have an outsized ability to imprint their notion of ethics on our movement. We hero-worship. We pick our careers as much on what looks like it will be funded well by EA and by what other top EAs are doing as on what seems in theory to be the best thing to us. Did you get into AI safety because it was justified under your worldview, or did you adopt a worldview because people who seemed smart convinced you of AI safety before you even had a clearly defined worldview?

One reason I’ve never really made comments like this on the forum is that it feels sort of silly. I would get it if people feel like there isn’t a place for anti-realists here, since once you go down the rabbit hole literally everything is arbitrary. Still I find myself more aligned with EAs by far than anyone else in my thinking patterns, so I never leave.

EA is not ethically agnostic. It is unquestionably utilitarian (although I feel there is still some debate over the "total" part). Is this a problem for people of other ethical viewpoints? I don't know, I can't speak for other people. But I think there's a lot of value to where the utilitarian rubber meets the road even if you have other moral considerations, in the ruthless focus on what works. For example, I still maintain a monthly donation to GiveDirectly even though I know it is much less cost-effective than other GiveWell top charities. Why? Because I care about the dignity afforded by cash transfers in a non-consequentialist way, and I will comfortably make that choice instead of having a 10x larger impact from AMF or something like that. So I follow the utilitarian train up to a certain point (cash transfers work, we measure the impact with evidence) and then get off the train.

In this metaphor, EA keeps the train going until the end of the line (total utilitarianism?) But you don't need to stay on until the end of the line. You can get off whenever you want. That makes it pretty freeing. The problem comes only when you feel like you need to stay on the train until the end, because of social pressure or because you feel clueless and want to be in with the smart people.

The eternal mantra: EA is a question, not an answer. And even if the majority of people believe in an answer that I don't, I don't care - it's my question as much as it is theirs.

If EA is unquestionably utilitarian I don’t really like the vocabulary we use. Positive impact, altruism, global priorities research are all words that imply ethical agnosticism imo, or just seem somewhat disingenuous if not without proper context.

Also it’s a bit unclear to me that EA is unquestionably utilitarian. Is there some official statement by a top org saying as much?

On 80k’s “common misconceptions about ea” : “ Misconception #4: Effective altruism is just utilitarianism”.

Open Phil talks about world views they consider “plausible”, which isn’t explicitly anything nor is it compatible with anti realism.

I don't doubt that EA operates as a utilitarian movement. But if this is more or less official then there should be more transparency.

Yes, EA is much broader than utilitarianism. See comment above and Will's paper.

I agree EA needs some kind of stance on what 'the good' means.

In this paper, MacAskill proposes it should (tentatively) be welfarism, which makes sense to me.

It's specific enough to be meaningful and capture a lot of what we care about, but still broad enough to have a place for many moral views.

In this paper, MacAskill proposes it should (tentatively) be welfarism, which makes sense to me.

See also this recent post by Richard Chappell.

I ostensibly agree but who would decide such a stance? Our community has no voting or political system. I would be relatively happy for us to go with welfarism/beneficentrism, but I feel uncomfortable with the idea of a bunch of higher ups in the community getting together and just outright deciding this. 

This thread does not fit my view, to be honest: to talk about "the community" as a single body with an "official" stance, to talk about "EA being utilitarian"...

EA is, at least for me, a set of ideas much more than an identity. Certainly, these ideas influence my life a lot, have caused me to change jobs, etc.; yet I would still describe EA as a diverse group of people with many stances, backgrounds, religions, ethical paradigms, united by thinking about the best ways for doing good.

In my life, I've always been interested in doing good. I think most humans are. At some point, I've found out that there are people who have thought deeply about this, and found really effective ways to do good. This was, and still is, very welcome to me, even if some conclusions are hard to digest. I see EA ideas as ways to get better at doing what I always wanted, and this seems like a good way to avoid disillusionment.

(Charles_Guthmann, sorry for having taken your thread into a tangent. This post and many of the comments hinges somewhat on "EA as part of people's identity" and "EA as a single body with an official stance", and your thread was where this became most apparent to me.)

Mau

I agree with a lot of this, although I'm not sure I see why standardized cost benefit analysis would be necessary for legitimate epistemic progress to be made? There are many empirical questions that seem important from a wide range of ethical views, and people with shared interest in these questions can work together to figure them out, while drawing their own normative conclusions. (This seems to line up with what most organizations affiliated with this community actually do--my impression is that lots more research goes into empirical questions than into drawing ethical conclusions.)

And even if having a big community were not ideal for epistemic progress, it could be worth it on other grounds, e.g. community size being helpful for connecting people to employers, funders, and cofounders.

I think I overstated my case somewhat or used the wrong wording. I don't think standardized CBAs are completely necessary for epistemic progress. In fact, as long as the CBA is done with outputs per dollar rather than outcomes per dollar, or includes the former in the analysis, it shouldn't be much of a problem, because as you said people can overlay their normative concerns.

I do think that most posts here aren't prefaced with normative frameworks, and this is sometimes completely unimportant (in the case of empirical stuff), or in other cases more important (how do we approach funding research, how should we act as a community and as individuals who are part of the community). I think a big part of the reason that it isn't more confusing is that, as the other commenter said, almost everyone here is a utilitarian.

I agree that there is a reason to have the ea umbrella outside of epistemic reasons. So again I used overly strongly wording or was maybe just plainly incorrect.

A lot of what was going on in my head with respect to cost benefit analyses when I wrote this comment was about grantmaking. For instance, If a grantmaker says it’s funding based on projects that will help the long term of humanity, I feel like that leaves a lot on the table. Do you care about pain or pleasure? Humans or everyone?

Inevitably they will use some sort of rubric. If they haven't thought through what normative considerations the rubric is based on, the rubric may be somewhat incoherent to any specific value system or, even worse, completely aligned with a specific one by accident. I could imagine this creating non-Bayesian value drift, since while research CBAs allow us to overlay our own normative frameworks, grants are real-world decisions. I can't overlay my own framework over someone else's decision to give a grant.

Also I do feel a bit bad about my original comment because I meant the comment to really just be a jumping off point for other anti-realists to express confusion about how to talk about their disillusionment or whether there is even a place for that here but I got side tracked ranting as I often do.

Talking about "there should be spaces to articulate hunches", I am now taking the plunge and speed-writing a few things that have felt slightly off in EA over the last few years. They are definitely not up to the standard of thought-through critique (I've tried to indicate confidence levels to at least show how they might compare against each other), but I'd like to know if any of them resonate with other people here who might've thought about them more.  The list isn't complete, either.

Things that feel off in EA:

  • Lots of white male people working on AI alignment (pretty sure that this is not great)
  • Issues in the community
    • feeling unwelcome as a woman, and from a non-maths background... why is this still a thing?? >>> feeling quite disillusioned after spending a lot of my early time (very sure)
    • the negative mental health effect of being surrounded by an optimising mindset all the time (and, as a result, internalising "you need to be better", as opposed to "you are enough (as a person)", which maybe results in people being valued/gaining status by how impactful they are, and thereby conflating worth-as-a-person with worth-of-your-actions?) (Very sure, at least in my own experience)
  • How unaware are we of the place this movement takes in the grander scheme of things? I sometimes worry about the resemblance to eschatological movements, but also biases that we pretty surely have. (Unsure about the eschatology stuff, but definitely worried about blindspots)
  • How would a community with a different background approach the question of how to do the most good?
  • What could we learn from an STS/sociology/anthropology analysis of the movement? (These fields tend to study e.g. where a (scientific) movement comes from, and what their values and maybe assumptions are... I feel like such an outside perspective might indicate some blind spots?)
  • Relying so heavily on economic tools, and then thinking the solutions are unbiased? (unsure about the extent to which this is a problem, and to what extent this is based in optimising for impact as such and you can't )
  • Lack of respect for established fields, like the whole risk community, and lack of communication between fields. Not using already established methods from those fields, or making use of their expertise. Not really trying to frame things in a way that would be appealing to them (I'm basing this on conversations with staff members at the Institute for Risk and Disaster Reduction in London, and STS (science and technology studies) scholars in Edinburgh and elsewhere). (Fairly sure)
  • In general, perhaps, neglecting some less "wacky" arguments, like the common-sense argument for X-risk reduction, and also thinking about non-utilitarian values (I feel like we're on an okay path here)
  • I have a weird feeling about similarities in thinking to liberal and libertarian ideology. (Just a hunch)
    • Maybe something about the individual as unit of analysis, and calculating in terms of aggregate welfare (of individuals) ? (Medium sure?)
    • And, in turn, neglecting other values like justice, equality etc... I am kind of confused that there isn't more discussion around these values at least.
    • Maybe neglecting goodness of process over concerns over goodness of outcome? I am unsure about this myself, but the kinds of things I'm thinking about are: How important is it that decisions involving all of humanity are made in a democratic/participatory way. I think this is only partially thinking that participation is good in itself, and mostly having a hunch that it would increase quality of outcome... so it's more like "we should care about quality of process because we actually care about quality of outcome, but if we focus too much on outcome, the outcome will actually be worse. (Unsure)
      • Random side note: I've been curious about the potential of citizens' assemblies - maybe as an intervention for improving institutional decision-making? (speculative)

Relatedly, be wary of cutting ties to non-EA friends, social spaces, hobbies, or other sources of meaning. If you don’t have many such ties (e.g. because you’ve just moved to a new place), actively foster new ones. This isn’t just so that you have something to catch you if you end up wanting more distance from EA—it’s also an important psychological buffer to have in place to ensure that you feel able to think clearly about EA ideas, social pressures, and so on in the first place. If your life is set up such that your brain subconsciously thinks that criticizing EA could alienate you from your entire support structure, you’re likely to have a much harder time thinking independently.

 

When I see new people setting themselves up so they only spend time with other EAs, I feel worried.

It could be good to make 'have a social life outside EA' a norm.

I've tried to do this via keeping in touch with my friends from university, which I've found really valuable. I know others who've done it via a social scene around a hobby, or by having flatmates who aren't EAs.

I agree with this in principle... But there's a delicious irony in the idea of EA leadership (apols for singling you out in this way Ben) now realising "yes this is a risk; we should try and convince people to do the opposite of it", and not realising the risks inherent in that.

The fundamental issue is the way the community - mostly full of young people - often looks to, or overrelies on, EA leadership not just for ideas about which causes to dedicate themselves to, but also for ideas about how to live their lives. This isn't necessarily EA leadership's fault, but it's not as if EA has never made claims about how people should live their lives before: from donating 10% of their income to productivity 'hacks' that can become an industry in themselves.

I think there are many ways to put the wisdom of Helen's post into action, and one of them might be for more of the EA leadership to be more open about what it doesn't know - both in terms of the epistemics and the whole 'how to live your life' stuff. I'm not claiming EA leaders act like some kind of gurus - far from it, in fact - but I think some community members often regard them as such. One thing I think would be great is to hear more EA leaders talking about EA ideas in a tone like "honestly, I don't know - I'm just on this journey trying to figure things out myself; here's the direction I'm trying to move in".

I say this for two reasons: 1) because, knowing lots of people in leadership positions, I know this is how a lot of them feel, both epistemically and in terms of how to live your life as an EA, but it's not said in public; and 2) because I think knowing this has given me a lot more healthy psychological distance from EA - it lowers the likelihood of putting leaders on a pedestal / losing my desire to think independently.

["We're just kids feeling our way in the dark of a cold, uncaring universe trying to inch carefully towards ending all suffering and maximising pleasure of all beings everywhere". New tag-line?]

When I see new people setting themselves up so they only spend time with other EAs, I feel worried.

When you see this happen, is it usually because EA fills up someone's limited social schedule (such that they regretfully have to miss other events), or because they actively drop other social things in favor of EA? I'm surprised to see the phrase "setting themselves up", because it implies the latter.

I also wonder how common this is. Even when I worked at CEA, it seemed like nearly all of my coworkers had active social lives/friend groups that weren't especially intertwined with EA. And none of us were in college (where I'd expect people to have much more active social lives).

Thanks for writing this post. I think it improved my understanding of this phenomenon and I've recommended reading it to others.

Hopefully this doesn't feel nitpicky but if you'd be up for sharing, I'd be pretty interested in roughly how many people you're thinking of:

"I know at least a handful of people who have experienced this (and I’m sure there are many more I don’t know)—people who I think are incredibly smart, thoughtful, caring, and hard-working, as well as being independent thinkers. In other words, exactly the kind of people EA needs. Typically, they throw themselves into EA, invest years of their life and tons of their energy into the movement, but gradually become disillusioned and then fade away without having the energy or motivation to articulate why."

I'm just wondering whether I should update toward this being much more prevalent than I already thought it was.

Before writing the post, I was maybe thinking of 3-5 people who have experienced different versions of this? And since posting I have heard from at least 3 more (depending on how you count) who have long histories with EA but felt the post resonated with them.

So far the reactions I've got suggest that there are quite a lot of people who are more similar to me (still engage somewhat with EA, feel some distance but have a hard time articulating why). That might imply that this group is a larger proportion than the group that totally disengages... but the group that totally disengages wouldn't see an EA forum post, so I'm not sure :)

Thanks for sharing this! Do you have a sense for what the denominator is? I've previously tried to get some sense of this, and found it pretty challenging (mostly for obvious reasons like "people who have left EA are by definition harder for me to contact").

I'm guessing 3-5 people is like 1 in 50 of the EAs you know, over the course of a ~decade?

Yeah, fair question, though I think estimating both the numerator and the denominator is tricky. Probably your estimate that I know very roughly ~150-250 EAs is approximately right. But I'd be nervous about a conclusion of "this problem only affects 1 in 50, so it's pretty rare/not a big deal," both because the 3-5 number is more about specific people I've been interacting with a lot recently who directly inspired this post (so there could be plenty more I just know less about), and because there's a lot of room for interpretation of how strongly people resonate with different parts of this / how completely they've disengaged from the community / etc.

That makes sense, thanks!

I wonder if you could get rough numbers on this from EAF analytics? Look for people who used to post frequently and then dropped off, and then hand-check the list for people who are known to have stayed in the movement (see the rough sketch below).

GWWC is another source of data: 40% of EA survey takers who signed the pledge report not meeting their commitment (that year), and presumably the rate among non-survey-takers is much higher. I couldn't find direct data from Giving What We Can more recent than 2014.
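As a very rough sketch of what that forum-analytics pass might look like (purely illustrative: it assumes a hypothetical `forum_posts.csv` export with `author` and `posted_at` columns, and the activity thresholds are arbitrary):

```python
import pandas as pd

# Hypothetical export: one row per forum post, with an author column and a
# timestamp column. These column names are assumptions for illustration;
# the real data would need to come from the Forum itself.
posts = pd.read_csv("forum_posts.csv", parse_dates=["posted_at"])
posts["year"] = posts["posted_at"].dt.year

# Posts per author per year.
activity = posts.groupby(["author", "year"]).size().unstack(fill_value=0)

latest_year = activity.columns.max()
earlier_years = [y for y in activity.columns if y < latest_year - 1]
recent_years = [y for y in activity.columns if y >= latest_year - 1]

# "Used to post frequently": at least 5 posts in some earlier year.
# "Dropped off": nothing in the last two years. Both cutoffs are arbitrary.
was_active = (activity[earlier_years] >= 5).any(axis=1)
now_quiet = activity[recent_years].sum(axis=1) == 0

dropped_off = activity[was_active & now_quiet]
print(f"{len(dropped_off)} previously active authors with no recent posts")
print(dropped_off.index.tolist())  # this is the list to hand-check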

Thanks for writing this; it resonated a lot with me. As Brad writes here, I think this provides a good case for why EAs shouldn't (immediately at least) consider leaving the EA movement if something doesn't feel right. As you point out, there are probably lots of things EA is getting wrong, and people leaving the EA community is likely not going to fix those things. Whilst challenging, I think people with disillusionment persevering (probably with some additional distance) and pushing back on certain things will be important for EA to self-correct.

I found myself confused about the quotes, and would have liked to hear a bit more where they came from. Are these verbatim quotes from disillusioned EAs you talked to? Or are they rough reproductions? Or completely made up?

A mix! Some things I feel or have felt myself; some paraphrases of things I've heard from others; some ~basically made up (based on vibes/memories from conversations); some ~verbatim from people who reviewed the post.

I think a big thing I feel after reading this is a lot more disillusioned about community-building. 

It is really unhealthy that people feel like they can’t dissent from more established (more-fleshed out?) thoughts/arguments/conclusions. 

Where is this pressure to agree with existing ideas and this pressure against dissent coming from? (some early thoughts to flesh out more 🤔)

This post isn’t the only thing that makes me feel that there is way too much pressure to agree and way too little room to develop butterfly ideas (that are never well-argued the first time they are aired, but some of which could iterate into more fleshed-out ideas down the road if given a bit more room to fly). 

My guess is there is a lot more uncertainty among people who fleshed out a lot of the ideas that now feel unquestionable, but uncertainty is hard to communicate. It is also much easier to build on someone else’s groundwork than to start an argument from scratch, making it easier to develop existing ideas and harder to get new ones, even potentially good ones, off the ground. 

I also think it's incredibly important to make sure people who are doing community building feel fully comfortable saying exactly what they think, even if it isn't their impression of the "consensus" view. No talking point should ever be repeated by someone who doesn't buy into it; if questioned, they can't defend a view that isn't really theirs. My guess is the original talking points were written up as inspiration or prompts, but were never intended to be repeated without question and without buy-in. It's such a big ask of people, though, to figure out that they don't really believe the things they are saying. It is especially hard in a community that values legible thinking and intelligence so much and can be quite punishing to half-formed thoughts. There is often a long period between not fully buying in and having a really well-fleshed-out reason for disagreeing. This period, where you have to admit you don't really agree but don't know why yet, is hard to be honest about, especially in this community. I don't think this pressure has easy answers, but addressing it seems incredibly important for creating healthy spaces with healthy epistemics.

More thoughts and suggestions on how we might improve

I agree with so much of this post's suggestions. I also have so many random half-baked ideas in so many half-finished google docs. 

Maybe get better at giving people who are not extremely dedicated or all-in a good impression, because we see them as potential future allies (even if we don't have the capacity to fully onboard them into the community)

It just does seem so important to make sure we have a culture where people really feel they don’t have to take all or leave all of effective altruism to have a voice in this community or to have a place in our spaces. The all-in and then all-out dynamic has a tonne of negative side-effects and I’ve definitely seen it a lot in the people I’ve known. 

I can see why the strategy of only accepting and focusing on all-in, extremely dedicated people makes sense, given how capacity-constrained community building is and given that this community can probably only accommodate so many new people at a time (detailed and nuanced communication is so time-consuming, and within-community trust seems important but is hard to build with too much growth too fast).

It is challenging to create room for dissent and still have a high-trust community with enough common ground for us all to cohesively be in the same community.

I'm not sure exactly what a feasible alternative strategy looks like, but it seems plausible to me that we can get better at developing allies to collaborate with who come by a local group and have a good time and some food for thought without this feeling like an all-in or all-out type engagement.

It seems good to me to have more allies who give us that sometimes nuanced, sometimes (naturally) missing-the-mark critique, and who feel comfortable thinking divergently and developing ideas independently. Many of those divergent ideas will be bad (of course, that's how new ideas work), but some might be good, and when they're developed, the people who got good vibes from their local group will be keen to share them with us, because we've left a good enough impression that they feel we'll really want to listen. I think there are ways of having a broader group of sympathetic people who think differently, with whom we don't try to do the "super detailed and nuanced take on every view ever had about how to help others as much as possible" thing. I'm not sure exactly how to do this well, though; messaging gets mixed up easily, and I think there are definitely ways to implement this idea that could make things worse.

More separate communities for the thinking (the place where EA is supposed to be a question) and the doing (the place for action on specific causes/current conclusions)

Maybe it is also important to separate the communities that are supposed to be about thinking and the ones that are supposed to be about acting on current best guesses. 

Effective altruism groups sound like they sometimes are seen as a recruiting ground for specific cause areas. I think this might be creating a lot of pressure to come to specific conclusions. Building infrastructure within the effective altruism brand for specific causes maybe also makes it harder for anyone to change their minds. This makes effective altruism feel like much less of a question and much more like a set of conclusions. 

Ideally, groups that are supposed to be about the question "how do we help others as much as possible?" should be places where everyone is encouraged to engage with the ideas, but also to dissent from them and to digress for hours when someone has an intelligent objection. If effective altruism is not a question, then we shouldn't say it is. If the conclusions newcomers are supposed to adopt are pre-written, then effective altruism is not a question.


Separating encouragement of the effective altruism project from the effective altruism community

Maybe we also need to get better at not making it feel like the effective altruism project and the effective altruism community come as a package. Groups can be places where we encourage thinking about how big a part of our lives we want the effective altruism project to be, and what our best guesses are on how to do that. The community is just one tool to help with the effective altruism project, to the extent that the EA project feels like something group members want to have in their lives. If collaborating with the community is productive, great; if a person feels like they can pursue the EA project, or achieve any of their other goals, better by not being in the community, that should be strongly encouraged too!


More articulation of specific problems, like this one

I'm so very glad you articulated your thoughts, because I think it's posts like this that help us better capture exactly what we don't want to be and more of what we do want to be. There have been a few such posts, and I think each one is getting us closer (we'll iterate on each other's ideas until we narrow down what we do and don't want the effective altruism community to be).



(Just for context, given I wrote way too many thoughts: I used to do quite a bit of community building and clearly have way too many opinions given my experience is so out of date; I'm still pretty engaged with my local community; I care a lot about the EA project; a lot of my friends consider themselves engaged with the effective altruism community, but many don't, though everyone I'm close to knows lots about the EA community because they're friends with me and I talk way too much about my random interests; I have a job outside the EA community ecosystem; and I haven't yet been disillusioned, but I cheated by having over-confident friends who loudly dissented, which I think helped a tonne in avoiding a lot of the feelings described here.)

 

There are huge differences in how well-grounded different EA claims are; we should be much more mindful of these differences. “Donations to relieve poverty go much further in the developing world than in the developed world” or “If you care about animal welfare, it probably makes more sense to focus on farmed animals than pets because there are so many more of them” are examples of extremely well-grounded claims. “There’s a >5% chance humanity goes extinct this century” or “AI and bio are the biggest existential risks” are claims with very different epistemic status, and should not be treated as similarly solid.

 

One thing I struggle with is switching back and forth between the two types of claims.

If we have a bunch of ideas that we think are really important and not widely appreciated ('type 1' claims), it's hard to trumpet those without giving off the vibe that you have everything figured out – I mean you're literally saying that other people could have 100x the impact if only they realised.

But then when you make type 2 claims, I'm not sure that emphasising how unsettled they are really 'undoes' the vibe created by the type 1 claims.

This is compounded by the fact that type 1 claims stated clearly are much easier to spread and remember, while hedging tends to be forgotten.

I'm sure there are ways to handle this way better, but I find it hard.

Hm... thinking in terms of 2 types of claim doesn't seem like much of an improvement over thinking in terms of 1 type of claim, honestly. I was not at all trying to say "there are some things we're really sure of and some things we're not." Rather, I was trying to point out that EA is associated with a bunch of different ideas; how solid the footing of each idea is varies a lot, but how those ideas are discussed often doesn't account for that. And by "how solid" I don't just mean on a 1-dimensional scale from less to more solid—more like, the relevant evidence and arguments and intuition and so on all vary a ton, so it's not just a matter of dialing up or down the hedging.

A richer framing for describing this that I like a lot is Holden's "avant-garde effective altruism" (source):

A general theme of this blog is what I sometimes call avant-garde effective altruism. Effective altruism (EA) is the idea of doing as much good as possible. If EA were jazz, giving to effective charities working on global health would be Louis Armstrong - acclaimed and respected by all, and where most people start. But people who are really obsessed with jazz also tend to like stuff that (to other people) barely even sounds like music, and lifelong obsessive EAs are into causes and topics that are not the first association you'd have with "doing good." This blog will often be about the latter.

I don't think it has to be that complicated to work this mindset into how we think and talk about EA in general. E.g. you can start with "There's reason to believe that different approaches to doing good vary a ton in how much they actually help, so it's worth spending time and thought on what you're doing," then move to "For instance, the massive income gap between countries means that if you're focusing on reducing poverty, your dollar goes further overseas," and then from there to "And when people think even more about this, like the EA community has done, there are some more unintuitive conclusions that seem pretty worthy of consideration, for instance..." and then depending on the interaction, there's space to share ideas in a more contextualized/nuanced way.

That seems like a big improvement over the current default, which seems to be "Hi, we're the movement of people who figure out how to do the most good, here are the 4 possibilities we've come up with, take your pick," which I agree wouldn't be improved by "here are the ones that are definitely right, here are the ones we're not sure about."

Thanks for this post - dealing with this phenomenon seems pretty important for the future of epistemics vs dogma in EA. I want to do some serious thinking about ways to reduce infatuation, accelerate doubt, and/or get feedback from distancing. Hopefully that'll become a post sometime in the near-ish future.

Thank you for this, particularly for writing it in a way that feels (to someone who isn't quite disillusioned) considerate to people who are experiencing EA disillusionment. I definitely resonate with the suggestions - these are all things I think I should be doing, particularly cultivating non-EA relationships, since I moved to the Bay Area specifically to be in an EA hub.

Also really appreciate your reflection on 'EA is a question' as more of an aspiration than a lived reality. I, along with other community-builders I know, would point to that as a 'definition' of EA, but we would (rightly) come across people who felt it simply wasn't very representative of the community's culture.

Thanks for writing this post. I think this would be very good to have as a required reading for fellowship programs.


Thank you for writing this!

I recently encountered the EA community (on Sep 16, 2022), and it is extremely unlikely I will ever become a member of it. That being said, I think there is a lot of room for me to collaborate with EA, bring a healthy outsider perspective, and contribute a lot of value.

I'm going to share a personal account of my own experience here, in the hope that it can serve as a piece of empirical evidence for your proposed solution: specifically, fostering people like me who think "EAs have a lot of great ideas, but the community is also misguided in a bunch of ways", and preventing deep initial infatuation. If there is more evidence similar to my own experience, I think this post should be part of the introductory content for people looking to interact with EA.

Reading this post allowed me to more easily break the three-phase cycle of infatuation, doubt, and distancing. When I first discovered EA through Giving What We Can and saw how few people (out of those with the means to do so) had signed the pledge, I worried EA's ideas might be subject to pressures that would eventually cause the movement to die out because the community was so self-sacrificing. This probably could have led me down a path of infatuation. Fortunately, I quickly discovered the forum, with its enormous number of community critiques, which made me suspicious enough of effective altruism to make sure it remains a small portion of my interpersonal network.

Your post probably had a very large positive impact on me, if only by preventing massive burnout (I'm pretty susceptible to working too hard if everyone around me thinks something is important and I do as well). It may go beyond this as I figure out how to use my outsider knowledge to offer some potential improvements, framed in EA terminology, which EA will hopefully use as a way to improve.

Perhaps it's helpful to recall that every ideology ever created has inevitably sub-divided into competing (sometimes warring) internal factions. The universal nature of this phenomenon suggests that the source of this division and conflict is that which all philosophies and philosophers have in common - what we're all made of psychologically: thought.

Point being, there is no way to edit EA, or any other collection of ideas, so as to remove division and conflict, because division and conflict are built into the human condition at such a fundamental level as to be incurable.

Thus, new arrivals to EA (such as myself) are perhaps best advised that EA, like all human endeavors, is a big mess that will never be fully straightened out. If such realism is presented right from the start, perhaps it can to some degree serve as an antidote to cynicism.

Thank you so much for the post! It also resonated very strongly with me, so I felt I might share my own disillusionment journey here, in the form of a post I wrote but never published. I wrote it in early 2020 - I followed pretty much the path the OP describes, but decided "this community isn't good for me", as opposed to "I disagree with the guiding principles", so I took a break from the community for around 2-3 years, dialed down my expectations for impact, and am now back as something like "EA-adjacent".

I feel like psychology offers some pretty standard solutions to disillusionment, and have light-heartedly thought about whether providing an EA-targeted charity/service to address this could be worthwhile.

However, there is an ethical dilemma or two here which I've mulled over for years in other contexts, with no conclusion:

1. The perfect prevention and cure for disillusionment would likely mean fewer smart people stay in EA - i.e., we successfully dissuade people who would have joined EA and become disillusioned from ever committing to EA in the first place. Of those we retain, the upside is that they are staying for the right reasons, for them, and thus in a sustainable way. We probably improve open-mindedness and decrease groupthink in the community too. Is this net positive? Would it be net positive, on average, if disillusioned EAs had never been part of the movement at all?

2. A side-effect of an effective cure for disillusionment is increased life satisfaction. Could too much life satisfaction cause decreased productivity or drive within EA? (I don't have any evidence for this; it's just a thought.)

Some other thoughts about possible factors:

  • A small number of people and organisations setting the tone for the movement seems in tension with it being 'a movement'. While movements might rally around popular figures, e.g. MLK, this feels like a different phenomenon, where the independent funding many orgs receive makes the prominence of those people less organic
  • Lack of transparency in some of the major orgs
  • No 'competition' among orgs means that if you have a bad experience with one of them, it feels like there's nothing you can do about it, especially since they're so intertwined - which exacerbates the first two concerns
  • Willingness to dismiss large amounts of work by very intelligent people, justified by a sense that they're not asking the right questions
  • Strong emphasis on attracting young people to the movement means that as you get older, you tend to feel less kinship with it

Other than the third, which I think is a real problem, one could argue all these are necessary - but I still find myself emotionally frustrated by them to varying degrees. And I imagine if I am, others are too.

I think the problem you raise is important and real, but I'm not sure that a post or policy or even a project would solve it - not even with improved feedback from people who are starting to drift away (which would be valuable, and which I'd love to discuss elsewhere). I think there may be a better approach, one that is more likely to happen and more likely to succeed, since it will happen (and is already happening) anyway, as with most intellectual 'movements', especially those close to an emerging Zeitgeist or perennial topic.

Here's the rub:

Should EA be like a supertanker: centrally controlled, and therefore perhaps more vulnerable* to whole-movement shipwreck or disgrace that goes unanswered?

Or should it be more like a fleet/regatta, able to weave, sub-divide and reunite, or to adapt depending on storms and circumstances/needs that become evident and ways to move which become viable?

And if "more like a fleet" is the answer, wouldn't that also solve this disillusionment problem, because people could join the particular ship or sailing style which they like? 

To a degree this is inevitably happening anyway, and I've seen it often in other contexts: Mennonites are famous for sub-dividing over minor differences, while retaining an overall unity. See also Quakers, NVC, scouts, psychoanalysts, socialists, even Utilitarians... and that's just the last century or so!

What's the advantage of a unitary movement, especially if there is no central comms/PR?


* especially since there is no required comms plan and training for the core team and leading lights, who tend to be busy writing/researching/teaching, and no obvious overall PR/reputation management strategy, which seems very high risk, considering how interesting the topics are for journalists!
