Most people in EA are concerned about extinction risk. If the expected value of humanity's future is net positive, then we really should prevent human extinction. But there are many uncertainties, such as AI, the importance of s-risks, and how humanity will evolve. I think humanity's future is chaotic. Can we estimate objectively whether humanity's future is net-positive or net-negative, or can we only rely on our moral intuitions?
In addition to Fin's considerations and the excellent post by Jacy Anthis, I find Michael Dickens' analysis to be useful and instructive. What We Owe The Future also contains a discussion of these issues.