Currently, I'm pursuing a bachelor's degree in Biological Sciences in order to become a researcher in the area of biorisk, because I was confident that humanity would stop causing tremendous amounts of suffering to other animals and would assume a net positive value in the future.
However, there was a nagging thought in the back of my head about the possibility that it would not do so, and I found this article suggesting that there is a real possibility that such a horrible scenario might actually happen.
If there is indeed a very considerable chance that humanity will keep torturing animals at an ever-growing scale, and thus keep having a net negative value for an extremely large portion of its history, doesn't that mean that we should strive to make humanity more likely to go extinct, not less?
Thanks for the comment, Zach. I upvoted it.
I fully endorse expected total hedonistic utilitarianism[1], but this does not imply that any reduction in extinction risk is way more valuable than a reduction in nearterm suffering. I guess you want to make this case with a comparison like the following:
I do not think the above comparison makes sense because it relies on 2 different methodologies. The way they are constructed, the 2nd caps the impact of life-saving interventions at the global population of around 10^10, so it is bound to result in a lower impact than the 1st even when describing the exact same intervention. Interventions which aim to decrease the probability of a given population loss[2] achieve this via saving lives, so one could weight lives saved at lower population sizes more heavily, but still estimate their cost-effectiveness in terms of lives saved per $. I tried this, and, with my assumptions, interventions to save lives in normal times look more cost-effective than ones which save lives in severe catastrophes.
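For illustration, here is a minimal sketch in Python of the kind of calculation I have in mind. The weighting function, costs, probabilities and lives saved below are hypothetical placeholders, not my actual assumptions or estimates:

```python
# Minimal sketch: weight lives saved by the population size at which they are
# saved, while still expressing cost-effectiveness in (weighted) lives saved per $.
# All inputs are hypothetical, for illustration only.

def weight(population: float, reference_population: float = 8e9) -> float:
    # Hypothetical weighting: lives saved at lower population sizes count more,
    # in proportion to how small the remaining population is.
    return reference_population / population

def cost_effectiveness(lives_saved: float, population: float, cost: float) -> float:
    # Weighted lives saved per $ spent.
    return weight(population) * lives_saved / cost

# Intervention saving lives in normal times (hypothetical inputs):
# 1 life saved at a cost of 5 k$, with the population at 8 billion.
normal_times = cost_effectiveness(lives_saved=1, population=8e9, cost=5e3)

# Intervention saving lives in a severe catastrophe (hypothetical inputs):
# 10^-4 chance of the catastrophe, 10^6 lives saved if it happens,
# population down to 1 billion, at a cost of 10^8 $.
catastrophe = 1e-4 * cost_effectiveness(lives_saved=1e6, population=1e9, cost=1e8)

print(f"Normal times: {normal_times:.2e} weighted lives saved per $")
print(f"Severe catastrophe: {catastrophe:.2e} weighted lives saved per $")
```

With these made-up numbers, the intervention in normal times comes out ahead even though lives saved at the lower population size are weighted 8 times as heavily.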
Less theoretically, decreasing measurable (nearterm) suffering (e.g. as assessed in standard cost-benefit analyses with estimates in DALY/$) has been a great heuristic to improve the welfare of the beings whose welfare is being considered both nearterm and longterm[3]. So I think it makes sense to a priori expect interventions which very cost-effectively decrease measurable suffering to be great from a longtermist perspective too.
In principle, I am very happy to say that a 10^-100 chance of saving 10^100 lives is exactly as valuable as a 100 % chance of saving 1 life.
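Spelling out the expected values behind this claim:

$$10^{-100} \times 10^{100} \text{ lives} = 1 \text{ life} = 100\,\% \times 1 \text{ life}.$$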
For example, decreasing the probability of the population dropping below 1 k for extinction, or dropping below 1 billion for global catastrophic risk.
Animal suffering has been increasing, but animals have mostly been neglected in such analyses. There are efforts to account for animals in cost-benefit analyses.