I agree with the sentiment that ideally we'd accept that we have unchangeable personal needs and desires that constrain what we can do, so it might not "make sense" to feel guilty about them.
But I think the language "that's just silly" risks coming across as saying that anyone who has these feelings is being silly and should "just stop", which of course is easier said than done with feelings! And I'm worried that calling feelings silly might make people feel bad about having them (see number 7 in the original post).
I think it's good to make object-level criticisms of posts, but it's important that we encourage rather than discourage posts that make a genuine attempt to explore unusual ideas about what we should prioritise, even if they seem badly wrong to you. People can make up their own minds about the ideas in a post, and some of the posts you're suggesting be deleted might be importantly right.
In other words, having a community that encourages debate about the important questions seems more important to me than one that shuts down posts that seem "harmful" to the cause.
Thanks for the thoughtful response!
When it comes to making your charity more effective at helping others, I agree it's not easy. Your example about how difficult it is to know which possible hires would be good at the job is a fair one, and you know much better than I do what it takes to make 240Project go well.
But I think we can use reasoning to identify which plans are more likely to lead to good outcomes, even if we can't measure them to be sure. For example, working on problems that are particularly large in scale, tractable, and unfairly neglected seems very likely to lead to better objective outcomes than focusing on a more local and difficult-to-solve problem (read more at https://80000hours.org/articles/your-choice-of-problem-is-crucial/).
Another relevant idea might be a "hits-based" approach: each attempt has a smaller chance of success, but a success would be so good that the expected value is higher than (say) that of the best GiveWell-style measurable approach.
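To illustrate with purely made-up numbers: a project with a 10% chance of averting 1,000 deaths has an expected value of 100 deaths averted, which beats a guaranteed intervention that averts 50, even though the "hit" will usually fail. (These figures are just for illustration, not estimates of any real intervention.)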
To be completely clear, I'm not saying you're making a mistake if your reason for focusing on people struggling in the UK is either that you want to help people but don't mind how big a difference you make (you clearly are helping!), or that you definitely want to work on something you have an emotional connection to. But if your goal is to help other people as best you can, then that's where the EA approach makes a lot of sense :)
Put another way, I completely agree that there are serious problems everywhere, including in wealthy countries, but I don't prioritise helping people in the UK because (a) I want my efforts to help others as much as possible, (b) it's clear that I can help much more by focusing on other problems, and (c) I don't see a reason to prioritise helping people just because they happen to live near me. If you disagree with any of those, I think it's perfectly reasonable to keep focusing on people in the UK! But I think that, on reflection, many people actually do want to help others as best they can[1].
It is surprisingly emotionally difficult to realise that even though the thing you are working on is hugely important (and EA doesn't at all disagree with that), there are other problems that might deserve your attention even more. It took me a while to come around to that, and I think it's psychologically hard to deal with the uncertainty of suddenly being open to working on something quite different from your old plan.
One caveat: although I mostly want to do the EA thing of making the biggest difference possible, I also sometimes separately want to do something that makes me really feel like I'm making a difference, like volunteering to address a problem near me. That's obviously fine too; it's just a different goal! We all have multiple goals.
Thank you for writing and sharing this! I suppose it's being downvoted because it's anti-EA, but I enjoyed reading it and understanding your perspective.
I had three main reactions to it:
I'd be interested in your thoughts!
Sounds excellent! Roughly how large is large?
Thanks for the reply!
If I understand correctly, you think that people in EA do care about the sign of their impact, but that in practice their actions don't align with this and they might end up having a large impact of unknown sign?
That's certainly a reasonable view to hold, but given that you seem to agree that people are trying to have a positive impact, I don't see how using phrases like "expected value" or "positive impact" instead of just "impact" would help.
In your example, it seems that SBF is talking about quickly making grants that have positive expected value, and uses the phrase "expected value" three times.
I think when people talk about impact, it's implicit that they mean positive impact. I haven't seen anything that makes me think that someone in EA doesn't care about the sign of their impact, although I'd certainly be interested in any evidence of that.
When someone learns about effective altruism, they might realise how large a difference they can make. They might also realise how much greater a difference a more diligent/thoughtful/selfless/smart/skilled version of themselves could make, and they might start to feel guilty about not doing more or being better.
Does Kristin have any advice for people who are new to effective altruism about how best to reduce these feelings? (Or advice on the way we communicate about effective altruism that might prevent these problems?)
Thanks for clarifying! I agree that if someone just tells me (say) what they think the probability of AI causing an existential catastrophe is without telling me why, I shouldn't update my beliefs much, and I should ask for their reasons. Ideally, they'd have compelling reasons for their beliefs.
That said, I'm probably slightly more optimistic than you about how useful forecasting is. I think my own credence in (say) AI existential risk should be an input into how I make decisions, but that I should be pretty careful about where that credence has come from.
I've skimmed this post - thanks so much for writing it!
Here's a quick, rushed comment.
I have several points of agreement:
But I think I disagree about several important things:
So I think I'm more keen on projects that focus on helping altruistic people to get on board with the EA project. I'd be very interested in any updates on how your plans go, though!