Denise Melchin hypothesizes that the interventions available for doing good haven't changed radically, yet EAs' emotional health and happiness in the endeavor have gotten worse. One reason she suspects is driving this is that unrealistic expectations of saving the entire future, or of preventing astronomical waste, have become internalized among many EAs, and she thinks this leads people to ignore other ways of doing good. Conditional on that harm actually happening, I give my take in the rest of this post.

My opinion: This is an interesting post. I agree with it at least somewhat, and below I offer my own reasons for why this might have happened. I also think this is a clear example of a memetic downside of longtermism, yet I still think it was correct to air longtermism publicly despite the psychological harm it may have caused. To lay out my agreements first:

  1. I agree that the vast majority of people have severely miscalibrated notions of how much good they can realistically do, with certain exceptions (Eliezer Yudkowsky, Holden Karnofsky, Ajeya Cotra, et al. probably do deserve awards for potentially protecting the long-term future). And that's despite my thinking the probability of our century being the hinge of history is in the 30-60% range.

  2. There are still a lot of opportunities to do good, and that matters. Heck, even basic earning to give is a good start toward impact. This is often lost in the buzz over longtermism.

  3. People should still realize just how much ordinary problems matter.

One larger point: this contrasts with another post by Denise Melchin, "My mistakes on the path to impact," linked here:

https://forum.effectivealtruism.org/posts/QFa92ZKtGp7sckRTR/my-mistakes-on-the-path-to-impact?commentId=4HsQcjhb9B5vWiBC5#comments

In that post, I'd estimate about 80% of the problems were her own; here the opposite holds, and I'd put about 80% of the blame on the EA community. Yes, people made unrealistically high estimates of their own impact, but the shift toward longtermism/AI safety was the spark. Now don't get me wrong: I still think longtermism should have been aired, and that staying quiet to protect psychological health would have been unreasonable given a non-trivial chance of existential risk; I just think the cost deserves a mention.

Finally, I think there's another reason this happened: we prioritize relative over absolute differences. In other words, we don't care nearly as much about a 10x increase in our money as we do about comparing ourselves to those 10x richer than us. And longtermism/AI safety, framed as 10-20 orders of magnitude more useful than most other causes, set off our relative-status comparison alarms, even though the universe is not obliged to make causes equal or nearly equal.

So it's not easy for our brains to tolerate that inequality, and yet accepting it is probably the most important thing any EA should know.

One last point, this time as a link: EA is not a competition.

https://forum.effectivealtruism.org/posts/D4guDdwgQcMtgb5Tm/effective-altruism-is-not-a-competition

Comments

Minor note: Denise is a she

Thank you! Will edit the post.
