Isaac Dunn

Joined Jul 2020


I've skimmed this post - thanks so much for writing it!

Here's a quick, rushed comment.

I have several points of agreement:

  • If we could get more people on board with the goal of EA (i.e., making the biggest positive difference they can), then that would be much better than just seeking out people who already have (or nearly have) that goal.
  • So it seems worth investing effort now into figuring out how to get people motivated towards this goal.
  • I agree that the four "reasons why people aren't joining your introductory EA program" you give are true statements (although I'm less sure they're the most important things to focus on).
  • I agree that getting people intrinsically motivated to maximise good seems really valuable, if it can be done.

But I think I disagree about several important things:

  • I think it's true that doing good is beneficial for one's own life. But I think that the magnitude of one's impact matters much less for one's sense of purpose, self-approval, etc.
    • People can live very purposeful and fulfilling lives by picking a cause; being cause-neutral and trying to maximise your positive impact seems, if anything, slightly less fulfilling, because it means you'll probably end up working on something more neglected, which tends to be less emotionally rewarding.
  • I think that helping already-altruistic people to realise that they care about the magnitude of their impact seems more promising than trying to help more people to be altruistic. I think that your program is mostly targeted at the second of these.
  • I suspect that the way people can end up with the goal of actually maximising good is more like:
    • Believe that the magnitude of your impact matters, and that bigger is better
    • Feel that having a large impact is achievable
    • Feel that doing the EA project is good for your own purposes (it makes you feel fulfilled, etc.)
    • Identify as someone who is trying to do the EA project
    • Feel a sense of belonging to a social group that is trying to do the EA project

So I think I'm more keen on projects that focus on helping altruistic people to get on board with the EA project.  I'd be very interested in any updates on how your plans go, though!

I agree with the sentiment that ideally we'd accept that we have unchangeable personal needs and desires that constrain what we can do, so it might not "make sense" to feel guilty about them.

But I think the language "that's just silly" risks coming across as saying that anyone who has these feelings is being silly and should "just stop", which of course is easier said than done with feelings! And I'm worried calling feelings silly might make people feel bad about having them (see number 7 in the original post).

I think it's good to make object-level criticisms of posts, but I think it's important that we encourage rather than discourage posts that make a genuine attempt to explore unusual ideas about what we should prioritise, even if they seem badly wrong to you. That's because people can make up their own minds about the ideas in a post, and because some of these posts that you're suggesting be deleted might be importantly right.

In other words, having a community that encourages debate about the important questions seems more important to me than one that shuts down posts that seem "harmful" to the cause.

Thanks for the thoughtful response!

I think when it comes to how you would make your charity more effective at helping others, I agree it's not easy. I completely agree with your example about it being difficult to know which possible hires would be good at the job. I think you know much better than I do what is important to make 240Project go well.

But I think we can use reasoning to identify which plans are more likely to lead to good outcomes, even if we can't measure them to be sure. For example, working to address problems that are particularly large in scale, tractable, and unfairly neglected seems very likely to lead to better objective outcomes than focusing on a more local and difficult-to-solve problem (read more at

Another relevant idea might be a "hits based" approach, where there's a smaller chance of success, but the successful outcome would be so good that its expected value is better than (say) the best GiveWell-style measurable approach.


To be completely clear, I'm not saying you're making a mistake if you're focusing on people struggling in the UK either because you want to help people but don't mind how big a difference you make (you clearly are helping!), or because you definitely want to work on something you have an emotional connection to. But if your goal is to help other people as best you can, then that's where the EA approach makes a lot of sense :)

Put another way, I completely agree that there are serious problems in all places, including in wealthy countries - but I don't prioritise working on helping people in the UK because (a) I want my efforts to help others as much as possible, (b) it's clear that I can help much more by focusing on other problems and (c) I don't see a reason to prioritise helping people just because they happen to live near me. If you disagree with any of those, I think it's perfectly reasonable to keep focusing on people in the UK! But I think on reflection, many people actually do want to help others as best they can[1].

It is surprisingly emotionally difficult to realise that even though the thing you are working on is hugely important (and EA doesn't at all disagree with that), there are other problems that might deserve your attention even more. It took me a while to come around to that, and I think it is psychologically difficult to deal with the uncertainty of suddenly being open to the possibility of working on something quite different from your old plan.


  1. ^

    One caveat is that although I mostly want to do the EA thing of making the biggest difference possible, I also do separately sometimes want to do something that makes me really feel like I'm making a difference, like volunteering to address a problem near me, and that's obviously fine too, it's just a different goal! We all have multiple goals.

Thank you for writing and sharing this! I suppose it's being downvoted because it's anti-EA, but I enjoyed reading it and understanding your perspective.

I had three main reactions to it:

  1. You obviously care a lot about helping other people and making the world a better place, which I value a lot. You're putting your money where your mouth is and actually taking action to make a difference. That's really admirable.
  2. You seem to think that effective altruism is all about having an objective, measurable metric of effectiveness, and that any attempt to do good that isn't measurable isn't worthwhile. That's not right - one approach that some people within EA take is to look for things with lots of evidence of excellent outcomes (per £), with GiveWell being the most prominent example of this approach. But more generally, EA is just about achieving the best objective outcomes, even if you can't measure them.
  3. I think you're probably making a big mistake, because as you say, there are other things you could be doing that could help people even more. As grim as it is to deprioritise the people you've been trying to help, there are many millions who are equally deserving of good things, but whom we are in a much better position to help. It's much easier to realise how bad things are when they're right in front of you (in the UK), but there are many problems that are less salient yet just as important. The EA approach in this situation, where we can't help everyone, is to decide what to prioritise based on what will lead to the best outcomes. Yes, that means 'giving up' on some people who really need and deserve support, but grimly, so does any approach - for example, your current approach isn't helping any of the many people outside the UK (or future generations, etc.).

I'd be interested in your thoughts!

Sounds excellent! Roughly how large is large?

Thanks for the reply!

If I understand correctly, you think that people in EA do care about the sign of their impact, but that in practice their actions don't align with this and they might end up having a large impact of unknown sign?

That's certainly a reasonable view to hold, but given that you seem to agree that people are trying to have a positive impact, I don't see how using phrases like "expected value" or "positive impact" instead of just "impact" would help.

In your example, it seems that SBF is talking about quickly making grants that have positive expected value, and uses the phrase "expected value" three times.

I think when people talk about impact, it's implicit that they mean positive impact. I haven't seen anything that makes me think that someone in EA doesn't care about the sign of their impact, although I'd certainly be interested in any evidence of that.

When someone learns about effective altruism, they might realise how large a difference they can make. They might also realise how much greater a difference a more diligent/thoughtful/selfless/smart/skilled version of themselves could make, and they might start to feel guilty about not doing more or being better.

Does Kristin have any advice for people who are new to effective altruism about how best to reduce these feelings? (Or advice on the way we communicate about effective altruism that might prevent these problems?)

Thanks for clarifying! I agree that if someone just tells me (say) what they think the probability of AI causing an existential catastrophe is without telling me why, I shouldn't update my beliefs much, and I should ask for their reasons. Ideally, they'd have compelling reasons for their beliefs.

That said, I think I might be slightly more in favour of forecasting being useful than you. I think that my own credence in (say) AI existential risk should be an input into how I make decisions, but that I should be pretty careful about where that credence has come from.
