1. Not wanting to move countries 
    (“there would be a lot more effective work options if I lived elsewhere”)
  2. Wanting a permanent work contract
    (“there would be a lot more effective work options with temporary contracts or grant-based pay”)
  3. Not wanting to be an independent researcher
    (“it could potentially be an effective thing to do, and I wouldn’t have to worry about replaceability”)
  4. Wanting to have a child
    (“if I didn’t want one I’d probably be much more flexible on the points above”) 
  5. Wanting to take some time off from work to take care of said child, in case I ever manage to have one
    (“although if I’m not having an impactful job by that time it probably won’t matter much anyway”)
  6. Burning out
    (“it wastes time and sets a bad example”)
  7. Feeling guilty about things
    (“I have read Replacing Guilt but I’m still having all these unproductive feelings”)

(Despite feeling guilty, I'm doing OK – ultimately, a lot of this is just sadness about not having an unlimited altruism budget. I wanted to post this because I like reading about others' experiences and thought someone else might like reading about mine. I don't really need solution proposals, but if you have some, other readers might benefit from them.)


I felt a lot of this when I was first getting involved in effective altruism. Two of the things that I think are most important and valuable in the EA mindset -- being aware of tradeoffs, and having an acute sense of how much needs to get done in the world and how much is being lost for a lack of resources to fix it -- can also make for a pretty intense flavor of guilt and obligation. These days I think of these core elements of an EA mindset as being pieces of mental technology that really would ideally be installed gradually alongside other pieces of mental technology which can support them and mitigate their worst effects and make them part of a full and flourishing life.

Those other pieces of technology, at least for me, are something like:

  • a conviction that I should, in fact, be aspiring to a full and flourishing life; that any plan which doesn't viscerally feel like it'll be a good, satisfying, aspirational life to lead is not ultimately a viable plan; that I may find sources of strength and flourishing outside where I imagined, and that it'd be fine if I have to be creative or look harder to find them, but that I cannot and will not make life plans that don't entail having a good life.
  • a deep comfort with my own values, some of which are altruistic and some of which are selfish, and with my own failings as a person; the ability to look at myself and see a lot of shortcomings and muddled thinking and mistakes and ways I've hurt people and to nonetheless feel love and pride for myself. For me, at least, the reason it hurt to notice I had selfish values was very close to the reason it hurt to notice I'd made a mistake or handled a situation poorly; I had a lot of my self-esteem and my conviction that I deserved to be happy and to be loved tied up in high expectations of myself. But of course it's very damaging to your altruistic endeavors, and to your personal growth, to be unwilling to look at yourself the way you truly are, or to love yourself only for things you won't always live up to. So I actually became much stronger and better once I deeply internalized that I am flawed, and that I am selfish, and that I am incoherent and muddled in many ways, and that this is also true of all other humans and we all remain deserving of good lives all the same.
  • a sense that I am better and a better EA when I'm stronger and happier; that depression and burnout genuinely sap my productivity and my creativity and affect my epistemics; that miserably dragging myself across the finish line actually produces worse results than living a life I take pride in and enjoy deeply; an appreciation for just how much I'm capable of when I'm happy and love my life and love the people around me and love the work I do and don't have to fight with myself to focus or prioritize.
  • a healthier relationship to my own motivational system: I used to do a lot of what I think of as 'dragging my brain across sharp rocks' to get stuff done. The stuff was aversive; I didn't want to do it; I hated doing it; I forced myself to do it anyway. This changed how I related to all kinds of tasks, even ones that didn't have to be aversive. I thought of 'intrinsic motivation' as basically willpower, the willingness to make myself hurt to get things done. It was hard to imagine doing things out of an uncomplicated, not-internally-coercive interest in making them happen. It took me a long time, and I had the luxury of a home environment and job that made it possible, but I flat-out don't do that anymore. I do things when I want to do them; when it would take internal coercion and 'dragging my brain over rocks' to do things, I don't do them. (I allow myself to make myself start a thing for a few seconds, to see if it just needed activation energy, but I don't force myself through things that require ongoing internal making-myself.) And it turns out that once I have some trust that doing things won't be unpleasant and aversive, I do plenty of things, and it's more achievable to add new things.



For me, this has taken a decade. I don't think I was particularly good at it, I don't know that I made all the right tradeoffs in doing it, and I hope it's faster and better for other people. But I do want people to know that there's a way of living your values that doesn't feel fueled by guilt, that it's possible to be an EA and have a life you just love, and that you should absolutely be aiming to be your strongest and best self rather than the version of yourself who sacrificed the most. 

For me, I have:

Not wanting to donate more than 10%.
("There are people dying of malaria right now, and I could save them, and I'm not because...I want to preserve option value for the future? Pretty lame excuse there, Jay.")

Not being able to get beyond 20 or so highly productive hours per week.
("I'm never going to be at the top of my field working like that, and if impact is power-lawed,  if I'm not at the top of my field, my impact is way less.")

Though to be fair, the latter was a pressure before EA too; there was just less reason to care, because I was able to find work where I could do a competent job regardless, and I only cared about comfortably meeting expectations, not achieving maximum performance.

Hey Jay,

Over the years, I have talked to many very successful and productive people, and most, in fact, do not work more than 20 productive hours per week. If you have a job with meetings and low-effort tasks in between, it's easy to get to 40-plus hours. Every independent worker who measures hours of real mental effort is more in the 4-5 hours per day range. People who say otherwise tend to lie, and they change their numbers if you press them on the details of what "counts as work" to them. It's a marathon, and if you get into that range every day, you'll do well.

I totally agree and I think EA should be less totalizing.

EA indirectly asks us to devalue our own direct communities in order to more effectively help others globally. For most people, this creates a big problem.

I want to see more focus on a version of EA for Normal People.

My experience: I’ve been living in the south of Japan for six years. It’s far from the most effective place to be. I gave up on being an independent researcher. I wanted financial stability and got a permanent work contract. If things go well, I’m going to have children soon. (Will MacAskill encourages people to have children in What We Owe the Future, by the way.) I don’t feel that guilty, because I’m still a lot more effectively altruistic than the vast majority of people who have the means.

I'm interested in building a career around technological risks. Sometimes I think about all the problems we're facing, from biosecurity risks to AI safety risks to cybersecurity risks to ... and it all feels so big and pressing and hopeless and whatnot.

Clearly, getting existential about these issues wasn't helping anyone. So I've had to accept that I have to pick a smaller, very specific problem where my skillset could be useful. Even if that doesn't solve everything, I won't solve anything if I don't specialise in that way.

Maybe some spirit of that could also apply to the altruistic actions I take in general? I.e., I have to start by going vegan OR setting up regular donations OR working towards a more flexible career OR thinking about whether I want kids OR ... I can't take on all those things at once.

I suppose the simple way I might remind myself of that is: "Altruism requires taking one step at a time, and not every altruistic person needs to generalise across all methods of being altruistic."
