Summary: the most effective interventions for doing good are still roughly as high-impact as they were a few years ago. [1] Unfortunately, some people in the EA community don’t feel as happy about the amount of good they can do as they did in the past. This is true even when the amount of good they are doing or can expect to do hasn’t decreased. While I think there are other sources of unhappiness with doing good, I am going to discuss one major contributor to this problem: adaptation to increased expectations about the amount of good we can do.

The original prompt to do good from the EA community was: did you know that by giving just 10% of your income, you can save a life, or even multiple lives, per year? But long-termism and the astronomical waste argument have shifted the community towards expecting to personally be able to accomplish a much larger amount of good. Anything less feels insufficient to many. The community and the individuals within it have adapted to this higher expectation. For the majority of people, this expectation of doing an astronomical amount of good does not materialise, and so they feel disappointed.

But that’s silly. As a first example, we can still save lives with only a small fraction of our income. Saving lives has not become any less tremendously important. Over 200,000 children under 5 still die of malaria each year. I am concerned that as people become disappointed at not living up to their hopes of possibly saving billions of lives or fundamentally shaping the far future, they also become disappointed with their ability to do good in general and give up. Nobody should give up for this reason. You can still do an amazing amount of good by saving lives.

The same is true for other ways to do good. Factory farming is as big an issue as it was a few years ago, with tens of billions of animals living in factory farms under dreadful conditions. Becoming vegetarian still saves, in expectation, over a dozen land animals per year from suffering and death. The same is true in areas outside of EA’s traditional causes. If you have been a regular blood donor or have been working on solar panels, your efforts produce roughly as much value as they did in the past. Having learnt tools from the EA community to quantify these efforts doesn’t change the bottom line of actual impact; it just helps with prioritising between options.

This equally applies to work on long-termist problems. People working on AI Safety or biorisk might have hoped to make critical contributions that would fundamentally shape the future, but reality shows these problems to be very hard. Most people working on them will only make a small contribution towards solving them, and that can feel disappointing. But many of these small marginal contributions are necessary.

Remember that the argument for long-termism is that people might be able to have more impact by focussing on global catastrophic risks or by shaping the long-term future in some other way. Whether you agree with this premise or not, the argument for long-termism is not that saving lives or other near-term interventions have less total impact than previously assumed. This means that fighting factory farming and other do-gooding efforts are as good and important as they ever were.

In some sense, this is obviously true. Yet I do not have the impression that this feels true to people. If saving lives and other do-gooding efforts now feel less good to you than they did when you first heard about EA, that probably means you have adapted to expecting to do a lot more good. That’s terrible!

Participating in the EA community should make you feel more motivated about the amount of good you are able to do, not less. If it makes you feel less motivated on balance, then the EA community is doing something fundamentally wrong and everybody might be better off somewhere else until this is fixed.

If you have adapted to the belief that you can personally prevent lots of astronomical waste, it is time to go back to having more realistic expectations.

I am not sure how to revert this adaptation on a community-wide level. I hope that reminding people of opportunities, like the fact that most of us can save dozens of lives over our lifetimes, is a good start.

On an individual level, you can also try the ordinary weapons against adaptation, like keeping a gratitude journal.

Think about all the ways you can have an amazing impact that are actually available to you personally. There are quite a lot. Saving lives via donations still trumps many other opportunities in terms of impact, but there are yet more ways of doing good worth considering, some of which are harder to quantify. Most ‘direct work’ options fall into this category: working on important problems in government, academia or the non-profit sector. You can also pursue the more effective forms of volunteering.

It is not clear to me to what extent the rise of long-termism in the EA community is why people insufficiently appreciate the high impact they are already able to have, whether that is via donating or working directly on problems of varying importance. I’m not sure the answer matters. Maybe there are other people who are able to have an even higher impact than you. But that doesn’t change the amount of good you can do.

Don’t forget to still aim for as much good as you can. This post is a reminder to do the most good you can personally do, and a suggestion of ways to feel better about that particular amount; it is not telling you to aim for anything less.

Try to have realistic expectations about how much good you can do, and take satisfaction from that. There are lots of important problems left. They are as big and as important as they ever were, and the world needs your contributions just as much as before.

Thanks to AGB for helpful suggestions for this post.


  1. This post discusses the fact that we can still do as much good as we could a few years ago. Please note, however, that this is only true to a first approximation. Global efforts to tackle important problems have been working, and many problems have actually become a bit smaller in the past few years, just not enough to change the main argument. ↩︎

Comments

I disagree with the common framing that saving lives and so on constitute one straightforward, unambiguous way to do good, and that longtermism just constitutes or motivates some interventions with the potential to do even more good.

It seems to me (and I'm not alone, of course) that concern for the long term renders the sign of the value of most of the classic EA interventions ambiguous. In any event, it renders the magnitude of their value more ambiguous than it is if one disregards flow-through effects of all kinds. If

  • accounting for long-term consequences lowers, in someone's mind, the expected value (or whatever analog of expected value we use in the absence of precise expectations) of classic EA interventions, and
  • she's not persuaded that any other interventions (or any she can perform) offer as high (quasi-)expected value, all things considered, as the classic EA interventions offer after disregarding flow-through effects,

then I think it's reasonable for her to feel less happy about how much good she can do as she becomes more concerned about the long term.

For the record, I don't know how common this feeling is, or how often people feel more excited about their ability to save lives and so on than they did a few years ago. One could certainly think that saving lives, say, has even more long-term net positive effects than short-term positive effects. I just want to say that when someone says that they feel less excited about how much good they can do, and that longtermism has something to do with that, that could be justified. They might just be realizing that doing good isn't and never was as good as they thought it was.

Yep, I agree that if i) you personally buy into the long-termist thesis, ii) you expect the long-term effects of ordinary do-gooding actions to be bigger in magnitude than the short-term effects, and iii) you expect these long-term effects to be negative, then it makes sense to be less enthusiastic about your ability to do good than before.
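
To make the logic concrete, here is a minimal sketch in notation introduced purely for illustration (none of these symbols appear in the post or comments): write the total value of an ordinary do-gooding action as the sum of a short-term and a long-term component,

$$V_{\text{total}} = V_{\text{short}} + V_{\text{long}}.$$

Under ii), $|V_{\text{long}}| > |V_{\text{short}}|$; under iii), $V_{\text{long}} < 0$; together these give $V_{\text{total}} < 0 < V_{\text{short}}$, so an action that looks clearly positive on its short-term effects alone comes out net negative once long-term effects are counted. (And iii) alone already gives $V_{\text{total}} < V_{\text{short}}$, which is relevant to the question below of which conditions are strictly needed.)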

However, I doubt most people who feel like I described in the post fall into this category. As you said, you were uncertain about how common this feeling is. Lots of people hear about the much bigger impact you can have by focussing on the far future. Significantly fewer are well versed in the specific details and practical implications of long-termism.

While I have heard about people believing ii) and iii), I haven't seen either argument carefully written up anywhere. I'd assume the same is true for lots of people. There has been a big push in the EA community to believe i); as far as I can tell, this has not been true for ii) and iii).

If I'm not misunderstanding you, being less enthusiastic than before just requires (i) (if by "the long-termist thesis" we mean the moral claim that we should care about the long term) and (iii). I don't think that's a lot of requirements. Plus, this is all in a framework of precise expectations; you could also just think that the long-term effects are ambiguous enough to render the expected value undefined, and endorse a decision theory which penalizes this sort of ambiguity.

My guess is that when people start thinking about longtermism and get less excited about ordinary do-gooding, this is often at least in part due either to a belief in (iii) or, more commonly, to the realization of the ambiguity, even when this isn't articulated in detail. That seems likely to me (a) because, anecdotally, it seems relatively common for people to raise concerns along these lines independently after thinking about this stuff for a while, and (b) because there has been some push to believe in this ambiguity, namely all the writing on cluelessness. But of course that's just a guess.

In principle you only need i) and iii), that's true, but I think in practice ii) is usually also required. Humans are fairly scope insensitive, and I doubt we'd see low community morale if ordinary do-gooding actions were merely less good by a factor of two or three. As an example, GiveWell's historical estimates of how much it costs to save a life with AMF have differed by about this much, and that didn't seem to have much of an impact on community morale. Not so now.

Our crux seems to be that you assume cluelessness, or ideas in the same space, are a large factor in producing low community morale around doing good. I must admit that I was surprised by this response; I personally haven't found these arguments to be particularly persuasive, and most people around me seem to feel similarly about such arguments, if they are familiar with them at all.

I don't know if there is lower community morale of the sort you describe (you're better positioned to have a sense of that than I am), but to the extent that there is, yes, it seems we disagree about whether to suspect that cluelessness would be a significant factor.

It would be interesting to include a pair of questions on the next EA survey about whether people feel more or less charitably motivated than last year, and, if less, why.

I personally haven't found these arguments to be particularly persuasive, and most people around me seem to feel similarly about such arguments, if they are familiar with them at all.

Have you written somewhere about why you don't find cluelessness arguments to be particularly persuasive?

No, I haven't. Given the number of upvotes Phil's comment received (from which I conclude that a decent fraction of people do find arguments in this space demotivating, which is important to know), I will probably read up on it again. But I very rarely write top-level posts, and the probability of this investigation turning into one is negligible.

Got it.

Perhaps a few bullet points in a comment if there's no space for a top-level post (better written quickly than not at all...)


Hi Milan,

This was now quite a while ago, but I have spent some time trying to figure out why I don't find cluelessness arguments persuasive. After we spent a bunch of time deconfusing ourselves, Alex has written up almost everything I could say on the subject in a long comment chain here.

Thanks... I replied on that thread.

Through thinking about these comments, I did remember an EA Forum thread from 4 years ago in which ii) and iii) were argued about: https://forum.effectivealtruism.org/posts/ajPY6zxSFr3BbMsb5/are-givewell-top-charities-too-speculative

It's worth reading the comment section in full. Turns out my position has been consistent for the past 4 years (though I should have remembered that thread!).

While I have heard about people believing ii) and iii), I haven't seen either argument carefully written up anywhere. I'd assume this is true for lots of people.

Agreed - would love to see this written up by someone.

It seems to me (and I'm not alone, of course) that concern for the long term renders the sign of the value most of the classic EA interventions ambiguous.

I expanded on this here: What consequences?

Personally, when trying to improve motivation, I'd mostly advocate for attempts to decouple motivation from total impact magnitude, rather than for attempts to argue that a high impact magnitude is achievable.

If you attach your motivations to a specific magnitude like "$2,000 per life saved", then you can expect them to fluctuate heavily when estimates change. But ideally, you would want your motivations to stay close to optimal, and thus consistent with your goals. I think this ideal is somewhat achievable and can be worked towards.

The main goal of a consequentialist should be to optimize a utility function; it really shouldn't matter what the specific magnitudes are. If the greatest thing I could do with my life were to keep a small room clean, then I should spend my greatest effort on that thing (my own wellbeing aside).

I think that most people aren't initially comfortable with re-calibrating their goals to arbitrary utility function magnitudes, but I think doing so is a skill that can be gradually learned, similar to learning stoic philosophy.

It's similar to learning how to be content regardless of one's conditions (aside from extreme physical ones), as discussed in The Myth of Sisyphus: https://en.wikipedia.org/wiki/The_Myth_of_Sisyphus

Unfortunately, some people in the EA community don’t feel as happy about the amount of good they can do as they did in the past. This is true even when the amount of good they are doing or can expect to do hasn’t decreased [...] I am not sure how to revert this adaptation on a community wide level. 

What makes you think that this feeling has become at least somewhat prevalent within the community, beyond one or two people? Just personal experience?

Ordinarily, I'd expect to see a "baseline" where some people feel happier/more motivated over time, others feel less happy/motivated, and the end result is something of a wash. I read a lot of EA material and talk to a lot of people, and I haven't gotten the sense that people are less motivated now, but my impressions could differ from yours for many reasons, including:

  • We're just talking to different people
  • I'm getting the wrong impression of people I talk to
  • I didn't see data from a survey or something like that
  • There are confounding effects that disguise "feeling less happy about one's potential to do good" (e.g. "feeling more happy about being part of the EA community as it grows and matures")

I've been involved in the community since 2012, and the changes seem drastic to me, based both on in-person interactions with dozens of people and on changes in the online landscape (e.g. the EA Forum and EA Facebook groups).

But that is not in itself surprising. The EA community is on average older than when it started. Youth movements are known for becoming less enthusiastic and ambitious over time, when it turns out that changing the world is actually really, really hard.

A better test is: how motivated do EAs feel who are in a similar demographic now to where long-term EAs were when EA started? I have the impression they are much less motivated. It used to be a common occurrence in e.g. Facebook groups to see people writing about how motivating they found it to be around other EAs. This is much rarer than it used to be. I've met a few new-ish early-20s EAs, and I don't think I can name a single one who is as enthusiastic as the average EA was in 2013. I wonder whether the lack of new projects being started by young EAs is partially caused by this (though I am sure there are other causes).

To be clear, I don't think there has been as drastic a change since 2018, which is I think when you started participating in the community.

Thanks for sharing more details on your perspective.

For context, I've been following GiveWell since 2012 and took the Giving What We Can pledge + started Yale's EA group in 2014. But I wasn't often in touch with people who worked at EA orgs until 2017.

My job puts me in touch with a lot of new people (e.g. first-time Forum posters, people looking to get into EA work), and I find them to be roughly as enthusiastic as the student group members I've worked with. But that enthusiasm is often tempered by a kind of conservatism that seems to come from EA messaging: they're more concerned about portraying ideas poorly, accidentally causing harm through their work, etc.

This may apply less to more experienced people, though I wonder how much of the feeling of "insufficiency" is closer to a feeling of deeper uncertainty about whether the person in question is focusing on the right things, given the number of new causes and ways of thinking about EV that have become popular since the early years.

Overall, I think you're better-positioned to make this evaluation than I am, and I'm really glad that this post was written.

I agree that absolute impact is the better way of looking at this. You talk about the original pitch of EA: donating 10% of your salary and saving quite a few lives. But now that same person can donate the same amount of money to the long-term future and potentially save orders of magnitude more lives in expectation. So I think EA has gotten more exciting. I could see that if someone has inflexible career capital in the global poverty or animal space, has little ability to donate, and became convinced of the value and tractability of the long-term future, this could decrease their relative impact. But I think this is less common than the case of being able to pivot (at least somewhat) towards higher impact. So I think a change in enthusiasm is more related to general trends with age and movements than to a change in perception of relative impact.

I've been in the community since about 2011, and I've also noticed this happening in myself and quite a few others who have been in the community for a long time. I'm not aware of any data on the subject. Denise's explanation of this, and this post, sound right to me.

I would guess that many feel small not because of abstract philosophy but because they are in the same room as elephants whose behavior they cannot plausibly influence. Their own efforts feel small by comparison. Note that this reasoning would have cut against originally starting GiveWell, though. If EA was worth doing once (splitting away from existing efforts to figure out what is neglected in light of those efforts), it's worth doing again. The advice I give to aspiring do-gooders these days is to ignore EA as mostly a distraction. Getting caught up in established EA philosophy makes your decisions overly correlated with existing efforts, including the motivational effects discussed here.

This is interesting. In giving this advice, do you feel that motivation is a bigger factor for you than increasing the variance of do-gooding efforts as a way of doing more good?

I am not sure in what contexts you give this advice, but I worry that in some cases it might be inappropriate. Say, in cases where people's gut feelings and immediate intuitions are clearly guiding them in directions that are not effectively altruistic.

I'd prefer a norm where people interested in doing the most good initially delegate their decisions to people who have thought long and hard about this topic, and where, if they want to try something else, they elicit feedback from the community. At least as long as the EA community also has a norm of being open to new ideas.


Fair point. It's mostly been in the context of telling people excited about technical problems to focus more on the technical problem and less on meta-EA and other movement concerns.

tl;dr: Telling people that working on near-term causes isn't very effective makes it less fun for them to work on those causes. This could be a problem if they can't convince themselves to work on long-term causes.


My impression is that the community as a whole, or at least prominent figures and organizations in the EA movement, have shifted their focus in the last couple of years. They have more or less explicitly said that working on x- or s-risks is much more important than working on global poverty etc. See for example Will's last TED talk, the 80,000 Hours job board ("Top recommended problems" vs. "Other pressing problems"), this survey: https://youtu.be/SpPvPve4qao?t=450, and the change of priorities of the EA Foundation...


Not everyone was able to go along. Some people just have trouble getting excited about x- or s-risks. These risks don't trigger the same emotional reaction as global poverty or factory farming. Obviously those people could still keep working on those cause areas, but after being told by people they highly respect that this is probably less than 5% as effective as working on x- or s-risks, it might not be as much fun as it used to be. It might even feel wrong to consider oneself an effective altruist if you work on something that some leaders of the movement obviously don't consider very important (in comparison). I actually wonder myself why global poverty is still considered an EA cause. Is there any EA (besides maybe Michael Plant) who argues that it could even come close to x- or s-risks with regard to effectiveness? Is the only reason for keeping near-term cause areas to get new people into the movement, or because of possible long-term effects (increasing the circle of concern...)? If we keep cause areas considered to be 1/20 as effective as the top cause areas, why not add new cause areas that are considered 1/20 as effective as global poverty?


Ok, I guess I might have lost the thread already, so I better stop rambling...

This is wonderful. Thanks for writing it!

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

In this piece, Denise argues that many people involved with EA are unsatisfied because they’ve developed high expectations about how much good they’ll be able to do — expectations which, when upset, lead to a lack of motivation. 

While the scope of this problem isn’t clear to me, I do think the essay was beautifully written, and it struck a chord with many readers. There are several lines I expect to reference well into the future:

“Maybe there are other people who are able to have an even higher impact than you. But that doesn’t change the amount of good you can do.”

“Participating in the EA community should make you feel more motivated about the amount of good you are able to do, not less. If it makes you feel less motivated on balance, then the EA community is doing something fundamentally wrong and everybody might be better off somewhere else until this is fixed.”