
Crossposted from my new blog, Otherwise. Extended somewhat since yesterday; thanks for the prompt to be more specific!

Several times in the last few weeks I’ve seen someone saying EA has become really bad for them. I don’t have anything really profound to say here, but I think people who are especially miserable in EA should seriously consider leaving.

There are lots of ways to engage with EA, and sometimes if you’re burnt out on some aspects there are other aspects that still feel viable and invigorating. But maybe there aren’t. Maybe the whole thing makes you want to hide in bed. And if so, I think you should trust your gut. Not to hide in bed, I mean, but to get some space from EA.

I realize “take some space” might be easier said than done, especially if you work in the EA space or a lot of your social life is there. Maybe try out a small experiment first: a vacation from EA-related work, reading, or actions.
........

For a while, interviewing childcare providers caused me to feel especially doom-y. No one was doing anything wrong, but I hated the whole process. I was simultaneously afraid they wouldn’t like my family enough to want to work for us, and that I’d choose someone bad. After a difficult round of interviews, we found someone and stuck with the arrangement longer than was ideal, partly so I wouldn’t have to go through the search as soon. Two years later, I was in a different headspace and able to do the task without so much worry.

.........

Who do I mean this for?

I don't mean that we should all leave when EA gets hard. I think a lot of us find EA hard but also find a kind of determination and energy in response.

But sometimes it's more like exhaustion and despair. And if you're in that zone for a while, that's when I think you should consider getting away.

(I'm imagining someone out there worried about whether they are unhappy enough to qualify. If you're on the fence, maybe take three months off and see whether it gets clearer.)

If other people's work depends on you, like if you're managing an organization or large project, that makes walking away a lot more costly. But if you're miserable in a role, I'd be surprised if you could stay in it long-term without slowly poisoning things around you.
........

A tiny minority of the people making progress on the world’s problems identify as EAs. There’s a ton of good work to be done outside this space. 

Or maybe you need to get back to more basic human stuff: eating, sleeping, moving, feeling. If you think you might need to just focus on that for a while, I encourage you to try it.

I want to tell you that taking care of yourself is what's best for impact. But is it? I don't know. I'm sure it's true in some circumstances and not in others. My guess is that if you feel like you're drowning, you need to disrupt something about your circumstances, and you'll eventually be more able to do good work (in EA or outside EA) than if you'd continued struggling in the same place.

.........

Someone rightly pointed out that this is all kind of vague. Some more concrete things that I mean:

  • I personally won't dislike you for leaving.
  • If it would feel like a relief to have someone's permission to go, you have mine.
  • I expect the EA community to be healthier / more vigorous if people who are having a terrible time move away from it.


I want people I know to be ok, and that probably slants my judgement about whether this is overall good for the world. So if you're considering leaving, maybe you should ask someone who's more of a hard-ass than me and see what they say.

Comments (15)



I temporarily left the EA community in 2018 and that ended up well.

I took a time-out from EA to focus on a job search. I had a job that I wanted to leave, but needed a lot of time and energy to handle all the difficulties that come with a job search. My career path is outside of  EA organizations.

How I did it practically:
- I had a clear starting point and wrapped up existing commitments. I stopped and handed over my involvement in local community building and told my peers about the time-out. I donated my entire year's donation budget in February.
- I set myself some rules for what I would and would not do. No events, no volunteering, no interaction with the community. I deleted social media accounts that I only used for EA. I blocked a few websites, most notably 80000hours.org. I would have donated if my time-out had taken longer, but without any research.
- I did not set an end point. The time-out would be as long as needed. I returned soon after I signed the new contract, 8 months after my starting point. It could have been much longer.

This helped a lot to get the job search done.

I could not, and did not want to, stop aiming for a positive impact on the world. I probably did more good overall than if I had stayed involved in EA during the job search.

I can recommend this to others and my future self in a similar situation.

[Edit: this whole comment makes less sense after Julia's edits. Thanks for helping out with my questions, Julia.]

I'm not trying to be oblivious or facetious, but I don't really understand what it means when Julia and other people say "it's okay to leave EA" or "it's fine to leave if you need to" or conversely for someone else to say, perhaps to themselves "it's not okay to leave EA".  It doesn't feel... concrete enough? For me to make sense of. I want to taboo the words "fine" and "okay" to try to understand better. 

Sometimes EA is hard for me and I want to leave and I'm like "is it fine? Is it okay?" And like, damn,  that seems like a really hard question. 

I directionally agree with "My guess is that if you feel like you’re drowning, you need to disrupt something about your circumstances, and you’ll eventually be more able to do good work (in EA or outside EA) than if you’d continue struggling in the same place.", especially if people have felt like they're drowning for months instead of e.g. hours.

Some things people could interpret this post as meaning:

  • Julia thinks you shouldn't feel bad about yourself if you leave EA (because it wouldn't be healthy or productive). (Idk if this is true, I feel like the fact that I'd be disappointed in myself if I didn't do EA stuff drives me to do actually valuable EA stuff, do we know that self-punishment is always eventually counterproductive?)
  • Julia Wise won't hate you if you leave EA (probably true)
  • Julia Wise wants to send care and warm feelings towards EAs who leave and EAs who are struggling (probably true)
  • Julia Wise thinks that in general, people who want to leave EA probably feel more negative about having that desire than is healthy? Useful? Productive?
  • Julia Wise claims no one will resent you leaving EA if you want to (probably false)
  • Julia Wise thinks EA will be better off if it has a culture of not resenting people who leave EA (probably?)
  • It's guaranteed to not be true that if you leave EA, some sentient beings have a horrible time instead of a good time (probably false)
  • In expectation, more sentient beings will have a good time instead of a horrible time, if you leave EA contingent on you wanting to leave EA (?? sounds like Julia agrees it's unclear)

I think there is a chance you're overcomplicating it a bit 😅. I think she is just trying to create a culture where people don't feel socially anxious about leaving EA if it is good for their mental health. Social norms are present everywhere, including EA, and even if we are quite nerdy and prone to rule-binding, the pressure and expectation to do good can conflict with some members' mental health.

And then, she is also saying that everyone should feel entitled to choose not to sacrifice (a significant portion of) their happiness to improve the world, and not to feel bad about it. People sometimes rationalize this by saying it is not sustainable, but I prefer to understand it as simply allowing people not to be maximally altruistic, as opposed to merely being maximally efficient with their altruistic budget. In a similar spirit to https://mindingourway.com/youre-allowed-to-be-inconsistent/

Mau

I don't know, my sense is the earlier comment correctly points to something being a little off here. Like a (not necessarily intentional) motte-and-bailey thing where the motte (the part that gets defended) is "this community should have a norm of not shaming people for being less-than-maximally altruistic," and the bailey is "it's ethically acceptable for people to not act maximally altruistically." But drawing on the latter claim (without justifying/scrutinizing it with ethical arguments) seems like severely devaluing ethical argumentation, which seems pretty questionable both philosophically and as a norm. (It also feels weirdly at odds with the community's usual norm of caring a lot about ethical reasoning/argumentation.) I think the earlier comment does well at teasing apart different versions of "it's fine," a necessary step for noticing the potential motte-and-bailey error.

(I'm still sympathetic to a community norm of letting people leave if they want to; I feel iffy about the community justifying that norm by denying the ethical value of helping others more, or by denying the claim that (if approached in a healthy way) this community has e.g., some ideas that are pretty helpful for doing more good.)

Thanks, this was a helpful prompt. I agree some of this was pretty muddled. I edited to say some more specific things.

I don’t have a lot of the answers about the “best” way to think about it, but +1 to breaking down “it’s fine” or “it’s ok” into component parts. <3 offered for when it feels hard.

"If it would feel like a relief to have someone's permission to go, you have mine."

This resonates and feels important, thanks for including this, Julia! <3

My take: There are two things we mean by EA:

  1. EA the social group / movement. (EA the people)
  2. EA the project of improving the world.


It's always ok to leave 1., and it is ok most of the time, but not always, to leave 2.

If you are leaving 2., you should try to figure out why. If it's because you don't really want to improve the world (like most people), I'd prefer you don't leave - but I'm fine with it as long as you're honest about it. 

IMO though, people are more likely to leave 2. because:

  • They are burned out or otherwise having a rough time. Take some time off and get back on track :)
  • They stopped enjoying "EA the social group". Great, you don't need us to keep improving the world!
  • They have other things that are personally more important to them than improving the world (like kids, family, or ideology about open source software). This is totally fine! People need to have good lives of their own!

I’m excited for your new blog Julia. 🙂

I did this for EA Austria, where I lived for three years. The community wasn't working for me and I found it demotivating to engage with. I kept in contact with people in EA Spain and online.

I like these concrete points. I often worry that people will dislike me for things, so if someone else is worrying about this, I hope it frees them.

  • I personally won't dislike you for leaving.
  • If it would feel like a relief to have someone's permission to go, you have mine.
  • I expect the EA community to be healthier / more vigorous if people who are having a terrible time move away from it.

Thanks for writing this. Great to see people encouraging a sustainable approach to EA!

I want to tell you that taking care of yourself is what’s best for impact. But is it?

I claim that this is true:

  • Finding personal fulfillment is a positive result in and of itself.
  • It's important to prioritize personal needs, otherwise you will not be in a good position to help others (family, friends, charity, etc.).
  • Ensuring one's relationship with EA is sustainable can actually lead to more impact over the long run (though this shouldn't be people's primary goal; personal wellbeing comes first).
  • Encouraging a sustainable culture can make EA more welcoming to others.

These are all true, but (as Julia alludes to) not necessarily enough to establish that the conclusion we really want to believe is actually correct.

(Of course, we don't live in the most inconvenient world, so wanting to believe a conclusion is only some evidence against its veracity, not necessarily decisive evidence.)

[anonymous]

This post is great and I really admire you for posting it.

Great post! I've been thinking a lot about this lately. 
