Hi all, new to the forum, longtime observer in the wings. I've been trying to translate my anti-longtermist intuitions into meaningful arguments and would appreciate some feedback.
One promising line of thought, I think, involves considering the ratio of suffering to the amount of care available to deal with it.
As phrased in the Tube ad for What We Owe the Future, longtermism weights the damaging consequences of our actions equally regardless of their location in time. A cut foot is a cut foot whether it happens today or a hundred years in the future. However, I don't think the reverse is true. Preventing a cut foot today may be much more valuable than preventing one a hundred years from now, because if we don't do it now, no-one will. Whereas our future selves and our descendants have a hundred years to work on the future problem.
In other words, we should weight suffering today much more highly than suffering in the future, because only we can do something about it.
Moreover, we might plausibly guess that the proportion of humans living in extreme suffering will continue to decrease over time. And, as EA continues to gather momentum, more people will allocate spare income, wealth, and time to EA-linked causes, or to altruism and philanthropy more generally. These twin factors may mean that (setting x-risks temporarily aside) the ratio of "care" available to each hypothetical unit of suffering may grow enormously over time.
If that is true, then we should greatly prioritise eliminating suffering today and in the near future, and leave most of the caring for future generations to future generations.
I realise the argument depends on a host of unarticulated & untested premises, not least the decision to set aside x-risks, which I think need to be considered separately. But I'm also keen to hear whether it's all been said, or dealt with, before. Does it ring true to you?
Hey Aron, thanks for your post!
This can be framed in terms of both the importance, tractability and neglectedness (ITN) framework and the significance, persistence and contingency (SPC) framework.
Using the ITN framework, you might argue that suffering that occurs in the present is more neglected than suffering in the distant future because fewer people will ever be in a position to address it (only the present generation). By contrast, everyone from now until a given point in the future will be in a position to address suffering that occurs at that point in time. It is also more tractable because we can address it directly, whereas we can only address suffering in the future indirectly, e.g. by empowering future generations to address it when it occurs. (These considerations weigh against each other, though.)
Using the SPC framework, you might argue that suffering in the distant future is not very contingent on our actions in the present because people in the future will be able to address it regardless of what we do now.
These points are not fatal to longtermism, though. The idea that future people will be better positioned to address future problems is the basis of patient longtermism, "the view that individuals can have a greater positive impact by investing current altruistic resources and spending them later than by spending them now."
Overall agreed, except that I'm not sure the idea of patient longtermism does anything to defend longtermism against Aron's criticism? By my reading of Aron's post, the assumption there is that people in the future will have much more wealth to deal with the problems of their time than we have now, which would make investing resources for the future (patient longtermism) less effective than spending them right away.
I think your point is broadly valid, Aron: if we knew that the future would get richer and more altruistically-minded as you describe, then we would want to focus most of our resources on helping people in the present.
But if we're even a little unsure—say, there's just a 1% chance that the future is not rich and altruistic—then we might still have very strong reason to put our resources toward making the future better: because the future is (in expectation) so big, if there's anything at all we can do to influence it, that could be very important.
And to me it seems pretty clear that the chance of a bad future is quite a bit more than 1%, which further strengthens the case.
This might happen but is not guaranteed.
Assume there is X suffering today and 100X suffering in the future in the worst case.
Assume there is a 90% chance that either the future does not contain 100X future suffering, or that people in the future invest in solving that suffering.
Now, if you could increase that probability from 90% to 91%, an expected value calculation says this would be as good as solving all present suffering: a one-percentage-point reduction in the chance of 100X future suffering averts 0.01 × 100X = X suffering in expectation.
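That arithmetic can be sketched in a few lines of Python; the 90% / 100X figures are the made-up toy numbers from above, not real estimates:

```python
# Toy expected-value comparison; X is an arbitrary unit of suffering.
X = 1.0                      # present suffering, in arbitrary units
FUTURE_WORST_CASE = 100 * X  # worst-case future suffering

def expected_future_suffering(p_averted):
    """Expected future suffering if it is averted with probability p_averted."""
    return (1 - p_averted) * FUTURE_WORST_CASE

baseline = expected_future_suffering(0.90)
improved = expected_future_suffering(0.91)

# Raising the probability by one percentage point averts
# 0.01 * 100X = X suffering in expectation -- the same amount
# as eliminating all present suffering.
print(baseline - improved)  # approximately X
```

So under these toy numbers, a tiny improvement in the probability of a good future trades off one-for-one against eliminating all present suffering.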
We have multiple reasons to expect the future to have capacity for a lot more suffering than the present, such as:
- more people, animals, sentient beings in general will exist, hence more beings who can suffer
- technology may enable more stable governments and lock-in scenarios, so suffering states could persist far longer
The exact figures of "90%" and "100X" are made up; some speculate there could be as many as 10^40 digital minds in the future.