I've spent a lot of time over the past few months reading about strong longtermism, trying to make up my mind on it as a cause area - and the more I read, the more confused and undecided I get. I don't think I'm the only one - there are people who have devoted the majority of their careers to studying the philosophy and remain undecided[1]. It doesn't make sense for many people, most of whom aren't best suited to philosophy, to spend ages going down this rabbit hole.
On the other hand, we need some theory of change for working on X-risk. We can't just take 'AI will kill us all' as a good reason for working on X-risk - we need to show that our tiny contribution to reducing the risk is worth more than the enormous amount of good we could do elsewhere. The alarmist rhetoric also puts many rational people off EA, including many of the most promising people for making a difference that I know, which I think is an absolute tragedy.
So my question is: what is the most basic way to justify X-risk reduction that would make sense to rational people?
Here's a starting point, comparing X-risk reduction to an optimal earning to give strategy. I'm certain there are much better ways though:
- 80,000 Hours estimated that the average lifetime earnings of a hedge fund trader were around $20 million in 2017[2]. If the trader gives half of that, that's $10 million.
- Assuming roughly $5,000 to save a life via top global health charities, $10 million / $5,000 = 2,000 expected lives saved across the career.
- 2,000 / 8 billion = 0.000025% of the world's population, which has the same expected impact as a 0.000025 percentage point reduction in X-risk (assuming we're counting only current people).
- A 0.000025 percentage point reduction in X-risk seems very achievable across a career, especially when I see people plugging in numbers as high as a 0.1% change for their expected impact. (A quick arithmetic sketch of this comparison follows the list.)
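To make the arithmetic explicit, here's a minimal sketch of the break-even calculation. It just reuses the figures from the bullets above ($10 million donated, ~$5,000 per life saved, 8 billion current people); the variable names and the $5,000 figure are illustrative assumptions, not settled estimates.

```python
# Back-of-envelope comparison: earning to give vs. X-risk reduction.
# All figures are the illustrative assumptions from the list above.

lifetime_donations = 10_000_000      # dollars donated across a career
cost_per_life_saved = 5_000          # assumed dollars per life saved (global health)
world_population = 8_000_000_000     # counting only current people

# Expected lives saved through donations alone.
lives_saved_by_giving = lifetime_donations / cost_per_life_saved  # 2,000

# An absolute reduction of p in extinction probability saves, in expectation,
# p * world_population current lives. Solve for the break-even p.
break_even_risk_reduction = lives_saved_by_giving / world_population

print(f"Lives saved by giving: {lives_saved_by_giving:,.0f}")
print(f"Break-even X-risk reduction: {break_even_risk_reduction:.2e}, "
      f"i.e. {break_even_risk_reduction * 100:.6f} percentage points")
# -> 2.5e-07, i.e. a 0.000025 percentage point reduction in extinction risk
```

Under these assumptions, a career in X-risk work only needs to beat a one-in-four-million absolute reduction in extinction probability to match the earning-to-give benchmark, which is the comparison the list is driving at.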
Thoughts?