While longtermism is an interesting ethical principle, I believe the consequences of the enormous uncertainty involved in how present decisions affect long-term outcomes have not been fully explored. Specifically, while the expected value of an intervention may seem reasonable, the magnitude of the uncertainty is likely to dwarf it. I wrote a post on this, and as far as I can tell, I have not seen a good argument addressing these issues.
https://medium.com/@venky.physics/the-fundamental-problem-with-longtermism-33c9cfbbe7a5
To be clear, I understand the risk-reward tradeoff argument and how people are often irrationally risk-averse, but that is not what I am talking about here.
One way to think of this is the following: if the impact of a present intervention on the long-term future is characterized as a random variable $X(t)$, then, while the expected value could be positive,

$$E[X(t)] > 0,$$

the standard deviation as a measure of uncertainty,

$$\sigma(t) = \sqrt{\operatorname{Var}[X(t)]},$$

could be so large that the ratio of expected value to standard deviation (the inverse of the coefficient of variation) is very small:

$$\frac{E[X(t)]}{\sigma(t)} \ll 1.$$

Further, if the probability of a large downside, $P(X(t) \le -c)$, is not negligible, where $c > 0$ is some large loss threshold, then I don't think that the intervention is very effective.
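To make this concrete, here is a minimal Monte Carlo sketch of the pattern I have in mind. It is my own illustrative construction, not from the linked post: the mixture distribution and all the probabilities and payoffs are made-up assumptions chosen only to exhibit a positive mean that is dwarfed by the standard deviation, alongside a non-negligible chance of a large loss.

```python
import numpy as np

# Illustrative only: model the long-term impact X of an intervention as a
# three-outcome mixture: usually a modest benefit, occasionally a windfall,
# occasionally a catastrophe. All numbers are assumptions for illustration.
rng = np.random.default_rng(0)
n = 1_000_000

outcomes = rng.choice(
    [1.0, 1000.0, -900.0],      # modest benefit, windfall, catastrophe
    size=n,
    p=[0.989, 0.006, 0.005],    # catastrophe probability is non-negligible
)

mean = outcomes.mean()          # E[X] > 0
std = outcomes.std()            # sigma: the uncertainty
print(f"E[X]         = {mean:.2f}")
print(f"sigma        = {std:.2f}")
print(f"E[X]/sigma   = {mean / std:.4f}")          # near 0: noise dwarfs signal
print(f"P(X <= -900) = {(outcomes <= -900).mean():.4f}")
```

With these (assumed) numbers, $E[X] \approx 2.5$ is positive, but $\sigma \approx 100$, so $E[X]/\sigma \approx 0.025$, and the probability of the catastrophic outcome is about 0.5%: exactly the regime I am worried about.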
Perhaps I have missed something here, or there are good arguments against this perspective that I am not aware of. I'd be happy to hear about them.
I see what you mean, and again I have some sympathy for the argument that it's very difficult to be confident about a given probability distribution in terms of both positive and negative consequences. However, to summarize my concerns: I still think that even with a large amount of uncertainty, there is typically reason to think that some things will have a positive expected value. Preventing a given event (e.g., a global nuclear war) might have a ~0.001% chance of making existence worse in the long term (possibility A), but it seems fair to estimate that preventing the same event also has a ~0.1% chance of producing an equal amount of long-term net benefit (possibility B). Both estimates can be highly uncertain, but there doesn't seem to be a good reason to expect that (A) is more likely than (B).
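To spell out the arithmetic behind this comparison (writing $V$ for the equal magnitude of the long-term harm or benefit, which is my own notation, not from the comment above), the expected long-term effect of preventing the event is

$$E[\Delta] \approx (0.001\%)(-V) + (0.1\%)(+V) = (-0.00001 + 0.001)\,V = +0.00099\,V > 0,$$

so as long as the downside and upside really are of comparable size, prevention comes out positive in expectation even when both probability estimates are deeply uncertain.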
My concern thus far has been that your argument seems to be: "(A) and (B) are both really hard to estimate, and they're both really low likelihood, but neither is negligible. Thus, we can't really know whether our interventions are helping" (with the implicit conclusion being: thus, we should be more skeptical about attempts to improve the long-term future). If that isn't your argument, feel free to clarify! In contrast, my point is: "Sometimes we can't know the probability distribution of (A) vs. (B), but sometimes we can make better-than-nothing estimates, and for some things (e.g., some aspects of X-risk reduction) it seems reasonable to try."