Great piece. I really connected with the part about the vastness of the space of possible conscious experience.
That said, I’m inclined to think that Utopia, however weird, would also be, in a certain sense, recognizable — that if we really understood and experienced it, we would see in it the same thing that made us sit bolt upright, long ago, when we first touched love, joy, beauty; that we would feel, in front of the bonfire, the heat of the ember from which it was lit. There would be, I think, a kind of remembering. As Lewis puts it: “The gods are strange to mortal eyes, and yet they are not strange.” Utopia would be weird and alien and incomprehensible, yes; but it would still, I think, be our Utopia; still the Utopia that gives the fullest available expression to what we would actually seek, if we really understood.
It sounds a little bit like you're saying that utopia would be recognisable to modern-day humans. If you are saying that, I'm not sure I would agree. Can a great ape have the kind of revelatory experience a human can have when taking in a piece of art? There exists art that can create the relevant experience in a human, but I highly doubt that any great ape, shown every piece of art in existence, would have such an experience. So how can we expect the experiences available in utopia to be recognisable to a modern-day human?
Yeah, the strong longtermism paper elucidates this argument. I also provide a short sketch of the argument here. At its core is the expected vastness of the future, which is what allows longtermism to beat other areas.
Where is this comparison? I feel like I'm repeating myself here, but in order to argue that focusing on existential risk is one of the best things we can do for the far future, it needs to be compared with the effect of other focus areas on the far future. But if we are not in a position to predict how cause areas will affect the far future (including a focus on x-risk), then how can we make the comparison and say that focusing on existential risk is better than any of the other causes?
Put another way, to claim that focus on existential risk is better for the far future than medical research, we need to show that focus on medical research does less for the far future than focus on existential risk does. Since we aren't in a position to predict the impact medical research will have on the far future, we aren't in a position to make such a comparison. Otherwise the argument just collapses to: existential risk reduction is probably good for the far future, so let's focus on it.
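To make the structure of that missing comparison concrete, here is a minimal sketch with entirely invented numbers (none of these figures come from any actual analysis); the point is only that the ranking turns on a quantity, the far-future expected value of the alternative cause area, that nobody has estimated.

```python
# Toy illustration with made-up numbers, not estimates from any source.
# Suppose the far-future expected value of x-risk work were somehow pinned down:
ev_far_xrisk = 1e15  # hypothetical units of future wellbeing per dollar (invented)

# The argument never supplies the corresponding figure for medical research.
# Depending on what that figure turned out to be, the ranking flips:
for ev_far_medical in (1e13, 1e15, 1e17):  # three equally arbitrary possibilities
    better = "x-risk" if ev_far_xrisk > ev_far_medical else "medical research"
    print(f"If medical research's far-future EV were {ev_far_medical:.0e}: {better} wins")
```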
To be clear, if every focus area, including existential risk, received equal priority, I don't see how we could justify greater investment in existential risk by arguing that it will have the highest expected value for the far future.
However, focus areas don't all receive equal priority. The analysis you linked by Duffy shows that focusing on existential risk, at least in the short term, is cost-competitive with some of the more effective focus areas, like the AMF, and perhaps in virtue of this it is more neglected than other areas. Therefore, it appears to be a worthy cause area. My concern is with how reference to the far future is being used as a justification for this cause.
It isn't a clear winner, but neither were any of the other options, and it was cost-competitive.
What most longtermist analyses do is argue that if you consider the far future, longtermism becomes the clear winner (e.g. here).
In this thread Toby Ord has said that he and most longtermists don't support 'strong longtermism', although he hasn't elucidated what the mainstream view of longtermism is.
We can make a general argument that it will be good in expectation, because it will help us deal with future disease, which will help us reduce future suffering.
With longtermist interventions, the argument is that the far-future effects are significantly positive and large in expectation.
If all the argument amounts to is that it will be good in expectation, well, we can say that about a lot of cause areas. What we need is an argument for why it would be better in expectation than all these other cause areas.
The simplest explanation is that future wellbeing matters, so reducing extinction risk seems good because we increase the probability of there being some welfare in the future rather than none.
Future well-being does matter, but focusing on existential risk doesn't necessarily lead to greater future well-being. It leads to humans being alive. If the future is filled with human suffering, then focus on existential risk could be one of the worst focus areas.
According to the authors of the linked article, longtermists have not convincingly shown that taking the far future into account impacts decision-making in practice. Their claim is that the burden of proof here lies with the longtermist. If the far future is important for moral decision-making, then this claim needs to be justified. A surface-level justification, that people in the far future would want to be alive, is equally available by reference to the near future.
You linked a quantitative attempt at answering the question of whether focus on existential risk warrants priority if we consider a horizon of <200 years, and the answer appears to be in the affirmative (depending on weightings). Is there a corresponding attempt at making this case using the far future as a reference point?
In order to justify preventative x-risk policies by reference to their impact on the far future, we would need to compare that impact with the impact other focus areas would have on the far future. That is in part where the 'We Are Not in a Position to Predict the Best Actions for the Far Future' claim fits in: how are we supposed to analyse the influence of any intervention (such as medical research, but also including x-risk interventions) on people living millions of years into the future? It's possible that, if we did have that kind of predictive power, many other focus areas might turn out to be orders of magnitude more important than focus on existential risks.
Thanks for linking to that research by Laura Duffy; that's really interesting. It would have been relevant for the authors of the current article as well.
According to their analysis, spending on conservative existential risk interventions is cost-competitive (within an order of magnitude) with spending on the AMF. Further, compared to plausible, less conservative existential risk interventions, the AMF is "probably" an order of magnitude less cost-effective. Under Rethink Priorities' welfare-range estimates, existential risk interventions are either cost-competitive with, or an order of magnitude less cost-effective than, cage-free campaigns and the hypothetical shrimp welfare intervention.
I think that actually gives some reasonable weight to the idea that existential risk can be justified without reference to the far future. Duffy used a timeline of <200 years, and even then a case can be made that interventions focusing on existential risk should be prioritised. At the very least it adds a level of uncertainty about the relevance of the far future in moral decision-making.
Perhaps that could have been worded better in my summary. It is not that we cannot predict what could boost medical research in the far future. Rather, it is that we cannot predict the effect that medical research will have on the far future. For example, the magnitude of the effect may be so incredibly large that it might take priority over traditional existential risks, either because it leads to a good future, or perhaps to a bad future. Or perhaps further investments in medical research will not lead to any significant gains in the things we care about. Either way, we don't have a means of predicting how our current actions will influence the far future.
With regard to value: being alive, having the ability to do what we want, and minimizing suffering might very well be things that people in the far future value, but they are also things that we value now. On the authors' account, therefore, these values can guide our moral decision-making by virtue of being things we value now and into the near future; noting that they will also be valued in the far future is an irrelevant extra piece of information, i.e. it does no additional work in guiding our moral decision-making.
What is the more widely endorsed view among longtermists?
I largely agree with your "distant countries" objection. Just because something is practically implausible does not make it morally wrong, or not worthy of attention. I also think it's not necessarily true that implementing longtermism requires radical changes to human psychology or social institutions. We need not necessarily convince every human on the planet to care about the lives of future generations, only those who might have a meaningful impact (which could be a small number).
Nevertheless, I think the other three objections that you don't mention provide some interesting and potentially serious challenges for longtermism, perhaps for weaker forms as well.
In the article the authors are somewhat ambiguous about the meaning of 'near future'. They do at one point refer to the present and a few generations as their potential time frame. But your point raises an interesting question for longtermists: how long does the future need to be in order for future people to have moral weight?
Although we might want to qualify it slightly: the element of interest is not necessarily the number of years into the future, but rather how many people (or beings) will exist in the future. The question then becomes: how many people need to be alive in the future in order for their lives to have moral weight?
If we knew a black hole would swallow humanity in 200 years, then, on some estimates, there could still be ~15 billion human lives to come. If we knew that the future held only 15 billion more lives, would that justify not focusing on existential risks?
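For what it's worth, a rough back-of-envelope lands in that ballpark; the births-per-year figure below is an assumption chosen for illustration (well below today's roughly 130 million births per year, on the thought that fertility declines), not a forecast.

```python
# Back-of-envelope only; the births-per-year figure is an illustrative assumption.
avg_births_per_year = 75e6   # assumed average over the period (today's figure is ~130M+)
years_remaining = 200        # until the hypothetical black hole arrives
future_lives = avg_births_per_year * years_remaining
print(f"{future_lives:.1e} future lives")  # 1.5e10, i.e. on the order of 15 billion
```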
So in order to justify longtermism in particular, you have to point out proposed policies that seem a lot less sensible, and rely on a lot less certainty.
If you're referring to the first point, I would reword this to:
In order to justify longtermism in particular, you have to point out proposed policies that can't be justified by drawing on the near future.
The 'washing out' hypothesis is a different concern from what we are talking about here. The idea I have been discussing is not that an intervention might become less significant as time goes on. An intervention could be extremely significant for the far future, or not significant at all. However, predicting the impact of that intervention on the far future is outside our purview.
From the article:
Or perhaps the difficulty lies in the high number of causal possibilities the further we reach into the future.
In the article they compare the impact of an intervention (malaria bed nets) on the near future with the impact of an intervention (reducing x-risk from asteroids, global pandemics, and AI) on the far future. As I said earlier, that is not an adequate comparison.
If we compare the positive impact of an intervention on quadrillions of people to the positive impact of an intervention on only billions of people, should we be surprised that the intervention that considers the impact on more people has a greater effect? Put another way, should we be surprised that the bed net intervention has a smaller impact when we reduce the time horizon of its impact to the near future?
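To see how much the size of the considered population alone drives this kind of comparison, here is a toy calculation; every number in it is invented purely for illustration.

```python
# Toy numbers, chosen only to show that the population term dominates the comparison.
near_future_pop = 8e9    # people considered for the bed-net intervention
far_future_pop  = 1e15   # the "quadrillions" considered for the x-risk intervention

per_person_benefit_bednets = 1.0    # arbitrary units of benefit per person
per_person_benefit_xrisk   = 1e-4   # ten thousand times smaller per person (made up)

total_bednets = near_future_pop * per_person_benefit_bednets  # 8e9
total_xrisk   = far_future_pop * per_person_benefit_xrisk     # 1e11
print(total_xrisk / total_bednets)  # ~12.5: the larger population still wins
```

Even with a per-person effect ten thousand times smaller, the intervention evaluated over the larger population comes out ahead, which is the sense in which the comparison is stacked from the start.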
To this you might say: well, interventions focused on malaria might have this 'washing out' effect. But so might interventions for reducing existential risk. For example, the intervention discussed in the paper to reduce extinction-level pandemics is to spend money on strengthening the healthcare system, something that could easily be subject to the 'washing out' effect itself.
Nevertheless, the bed net intervention is only one intervention, and there are other interventions that could have more plausible effects on the far future and that would make for more adequate comparisons (if such comparisons were feasible in the first place), medical research, for example.
If extinction and non-extinction are 'attractor states', which from what I gather means states that are expected to last an extremely long time, what exactly isn't an attractor state?
Let me translate that sentence: focusing on existential risk is more beneficial for the far future than other cause areas because it increases the probability of humans being alive for an extremely long time. If it's more beneficial, we need the relevant comparison, and, as per above, the relevant comparison is lacking.