
Listening to a few episodes of the 80,000 Hours podcast, I noticed a recurring theme: the value of one's time. A certain amount of money is better invested in one's own comfort than given to EA causes, or so the argument goes. Money spent on business-class plane tickets can improve sleep, which can save time or lead to better productivity. Returns diminish, of course, and at some point the marginal benefit of investing in comforts drops to the marginal gain from investing in EA causes.
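To make that break-even point explicit (my own stylized formalization, not something from the podcast): the rule of thumb is to spend on comfort up to the point where a marginal dollar does as much good spent on your own productivity as it would if donated,

$$\frac{\partial U}{\partial x_{\text{comfort}}} = \frac{\partial U}{\partial x_{\text{donation}}},$$

with diminishing returns on the comfort side guaranteeing that such a crossover exists.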

Now consider the most influential effective altruists who are still relatively early in their careers and potentially have decades ahead of them: it is easy to imagine that they could impact billions or trillions of lives (certainly if we're talking about longtermists). Imagine a scenario where such a person falls ill and their life can only be saved through an organ transplant or a terribly expensive treatment which, for whatever reason, they are not entitled to. I'm not aware of this having happened yet, but it is bound to happen.

I could be mistaken, but as far as I'm aware, the EA movement doesn't (yet) put a value on a person's life based on their potential to further EA causes. I could see a future where it does, though, and is confronted with a scenario like the one I outlined, after taking into account all arguments, including the possibility that the person could turn out to be the next SBF. From an EA point of view, is their life worth saving more than any other person's, setting aside measures like WALYs/QALYs, and if so, should there be an upper limit?

Yes, if such a far-fetched situation ever occurred (a productive EA suddenly confronted with the need for expensive medical treatment, but for some reason unable to raise enough cash from their personal friends, family, etc.), it could make sense for an EA organization to give them a grant covering some of the cost of the treatment, just as people sometimes receive grants to boost their productivity in other ways -- allowing them to hire a research assistant, financing a move so they can work in person rather than remotely, buying an external monitor to pair with their laptop, and so on.

But I don't think this would realistically extend to ever crazier levels, like spending millions of dollars on lifesaving medical treatment. First of all, there are few real-life medical procedures that cost so much, and fewer still that are actually effective at prolonging healthy life by decades. (A disproportionate share of US healthcare spending is incurred in the final months of patients' lives -- people are understandably very motivated to spend a lot of money for even a small increase in their own lifespan while fighting diseases like cancer, but it wouldn't make sense altruistically to spend exorbitant sums on this kind of late-stage care for random EAs.)

Furthermore, even if you believe that longtermist work is super-duper-effective, such that each year of a longtermist's research/effort saves thousands or millions of lives in expectation... (Personally, I am a pretty committed longtermist, but nevertheless I doubt that longtermism is bajillions of times more effective than everything else -- its big advantages on paper are eroded somewhat by issues like moral cluelessness, the lack of good feedback loops, the fact that I am not just a total hedonic utilitarian who linearly values the creation of more and more future lives, etc.)

...even if you believe that longtermism is super-duper-effective, it still wouldn't be worth paying exorbitant sums to fund the medical care of individual researchers, because it would be cheaper to simply pay the salaries of newly hired replacement researchers of equivalent quality! So, in practice, in a maximally dramatic scenario where a two-million-dollar medical treatment could restore a dying EA to another 30 years of perfect health and productivity, a grantmaking organization could weigh it against the alternative of spending roughly the same money on 30 years of research by other people at $70,000/year.
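As a quick sanity check on that comparison (using the figures above, and ignoring discounting, overhead, and salary growth):

$$30 \ \text{years} \times \$70{,}000/\text{year} = \$2{,}100{,}000 \approx \$2{,}000{,}000,$$

so the hypothetical treatment and the replacement-hire option cost about the same, which is exactly why it becomes a genuine grantmaking comparison rather than an obvious call.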

Of course if someone was especially effective and hard-to-replace (perhaps we elect an EA-sympathetic politician to the US Senate, and they are poised to introduce a bunch of awesome legislation about AI safety, pandemic preparedness, farmed animal welfare, systemic reform of various institutions, etc, but then they fall ill in dramatic fashion), the numbers would go up.  But they wouldn't go up infinitely -- there will always be the question of opportunity cost, and what other interventions could have been funded.  (Eg, I doubt there's any single human on Earth who we could identify as having such a positive impact on the world that it would be worth diverting 100% of OpenPhil's longtermist grantmaking for an entire year, to save them from an untimely death.)
