I am an independent researcher interested in AI alignment =)
One key issue is that we very likely do not know enough about what utopia means or how to achieve it. We also don't know enough about the current expected value of the long-run future (even conditional on survival), and we likely won't make much progress on these difficult questions before transformative AI or other existential risks arrive. Reducing P(extinction) therefore seems to be a necessary condition for being in a position to use safe AI to make progress on the important fields we need to figure out in order to understand what utopia means and how to increase P(utopia) (while avoiding downside risks such as S-risks in the process).
Examples of fields that could be particularly important to point our safe and aligned AI towards:
-Moral philosophy (in particular, to check whether total utilitarianism is correct or whether we can update to better alternatives)
-Governance mechanisms and economics, to implement our extrapolated ideal moral system in the world
It might be preferable to focus both on reducing P(doom) AND on reducing the risk of a premature, irreversible race to colonize the universe, so as to give us ample time to use our safe and aligned AI to solve other important problems and make substantial progress in the natural sciences, social sciences, and philosophy (a "long reflection" with AI, which does not need to be long on astronomical timescales).
Additional arguments to explore:
Existential risks and AI considerations aside:
Ageing generates a substantial amount of suffering (aged bodies in particular tend to be painful for a long while) and might be one of the dominant burdens on healthcare systems. I wonder, for example, how ageing compares with standard global health and development issues; it could plausibly match or even exceed them in scale (I would like to see a cost-benefit analysis of this).