Most people in EA are concerned about extinction risk. If the expected value of humanity's future is net positive, then we really should work to prevent human extinction. But there are many uncertainties: AI, the importance of s-risks, the future evolution of humanity, and so on. I think humanity's future is chaotic. Can we objectively estimate whether humanity's future is net-positive or net-negative, or can we only rely on our moral intuitions?
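As a rough sketch of the expected-value framing behind this question (the notation here is my own, not taken from the post linked below), we could write:

$$\mathbb{E}[V] = p_{\text{ext}} \cdot V_{\text{ext}} + (1 - p_{\text{ext}}) \cdot \mathbb{E}[V \mid \text{survival}]$$

If we take $V_{\text{ext}} \approx 0$, then the case for extinction-risk reduction rests on whether $\mathbb{E}[V \mid \text{survival}] = \sum_i p_i v_i > 0$, where the sum ranges over possible long-run futures and s-risk scenarios contribute large negative $v_i$. My question is essentially whether the $p_i$ and $v_i$ can be estimated objectively at all, or whether any estimate just launders our moral intuitions into numbers.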
This blog post also addresses the question. I think it has some blind spots, but also a lot of good points! https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive#fn-fn-40