I think of longtermism as a type of Effective Altruism (EA). I’ve seen some people talking about longtermism as (almost) an alternative to EA, so this is a quick statement of my position.

EA says to allocate the total community budget to interventions with the highest marginal expected value. In other words, allocate your next dollar to the best intervention, where 'best' is evaluated conditional on current funding levels. This is important, because with diminishing marginal returns, an intervention's marginal expected value falls as it is funded. So the best intervention could change as funding is allocated.
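
To make the re-ranking concrete, here is a minimal sketch in Python, assuming a simple diminishing-returns curve. The function names, decay rate, and base values are hypothetical illustrations, not real cost-effectiveness estimates; the point is only that "best" is recomputed conditional on funding already allocated.

```python
# Toy sketch of greedy allocation by marginal expected value.
# All numbers and names below are made up for illustration.

def marginal_ev(base_value: float, funded: float, decay: float = 1e-4) -> float:
    """Diminishing returns: marginal expected value falls as funding rises."""
    return base_value / (1.0 + decay * funded)

def allocate(budget: float, base_values: dict, step: float = 1_000.0) -> dict:
    """Give out the budget in chunks, each to the currently best intervention."""
    funded = {name: 0.0 for name in base_values}
    while budget >= step:
        # 'Best' is re-evaluated conditional on funding already allocated,
        # so the top-ranked intervention can change partway through.
        best = max(funded, key=lambda n: marginal_ev(base_values[n], funded[n]))
        funded[best] += step
        budget -= step
    return funded

print(allocate(100_000.0, {"malaria_nets": 10.0, "x_risk_reduction": 12.0}))
```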

Longtermism says to calculate expected value while treating lives as morally equal no matter when they occur. Longtermists do not discount the lives of future generations. In general, calculating the expected value of an action over the entire potential future is quite difficult, because we run into the cluelessness problem, where we just don't know what effects an action will have far into the future. But there is a subset of actions where long-term effects are predictable: actions affecting lock-in events like extinction or misaligned AGI spreading throughout the universe. (Cluelessness seems like an open problem: what should we do about actions with unpredictable long-term effects?)
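
As a sketch of the no-discounting point: writing $v_t$ for the expected value realised at time $t$ (a symbol introduced here just for illustration), a time-neutral longtermist sums value across all times, whereas a view with a pure time discount rate multiplies each term by a factor $0 < \delta < 1$:

$$\mathrm{EV}_{\text{undiscounted}} = \sum_{t=0}^{T} v_t \qquad \text{versus} \qquad \mathrm{EV}_{\text{discounted}} = \sum_{t=0}^{T} \delta^{t}\, v_t .$$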

Longtermist EA, then, says to allocate the community budget according to marginal expected value, without discounting future generations. Given humanity's neglect of existential risks, the interventions with the highest marginal expected value may be those aimed at reducing such risks. And even with diminishing returns, these could still be the best interventions after large amounts of funding are allocated. But longtermist EAs are not committed only to interventions aimed at improving the far future. If a neartermist intervention turned out to have the highest marginal expected value, they would fund that, and then recalculate marginal expected value and reassess for the next round of funding allocation.

Comments

I'm not sure who is saying longtermism is an alternative to EA, but that seems a bit nonsensical to me: longtermism is essentially the view that we should focus on positively influencing the longterm future in order to do the most good. It is therefore quite clearly a school of thought within EA.

Also, I have a minor(ish) bone to pick with your claim that "Longtermism says to calculate expected value while treating lives as morally equal no matter when they occur. Longtermists do not discount the lives of future generations." Will MacAskill defines longtermism as follows:

Longtermism is the view that positively influencing the longterm future is a key moral priority of our time.

There's nothing in this definition about expected value or discounting. I'll plug a post I wrote which explains that it has been suggested one can reach a longtermist conclusion using a decision theory other than maximising expected value, just as one may still reach a longtermist conclusion while discounting future lives.
 

I think there is a very clear split, but it's not over whether people want to do the most good. I would say the real split is between "empiricists" and "rationalists", and it's about how much actual certainty we should have before we devote our time and money to a cause.

The thing that made me supportive of EA was the rigorous research that went into cause areas. We have peer-reviewed studies that definitively prove that malaria nets save lives. There is real, tangible empirical proof that your donation to a GiveWell cause does real, empirical good. There is plenty of uncertainty in these cause areas, but it is relatively bounded by the data available.

Longtermism, on the other hand, is inherently built on shakier ground, because you are speculating on unbounded problems whose estimates could differ wildly depending on your own personal biases. Rationalists think you can overcome this by thinking really hard about the problems and extrapolating from current experience into the far future, or into things that don't exist yet, like AGI.

You can probably tell that I'm an empiricist, and I find that the so-called "rationalists" have laid their foundations on a pile of shaky and questionable assumptions that I don't agree with. That doesn't mean I don't care about the long term; climate change risk, for example, is very well studied.
