It’s commonly held within the EA community that X-risks constitute the most pressing issue of our time, and that the first order of business is preventing the extinction of humanity. EAs thus expend much of their effort and resources on causes like preventing pandemics and ensuring the responsible development of AI. There are arguments suggesting this may not be the best use of our resources, such as the person-affecting view of population ethics; these are addressed in works like The Precipice and the EA Forum post “Existential risk as common cause”. What truly frightens me, however, is the prospect that the human race ought to go extinct, and that we are causing astronomical harm by fighting extinction.
Most of the arguments that suggest this are fringe views, but just because they are unpopular does not mean they are false. Even if the chances of their being true are slim, the harm we would cause if they are true is so great that we must take them seriously and reduce our uncertainty as much as possible. Yet these arguments seem to be neglected within the EA community. The Global Priorities Institute would be well suited to this sort of research problem, and yet it has released only a single paper on the topic (“Do not go gentle: why the Asymmetry does not support anti-natalism” by Andreas Mogensen).
To help address this, I have compiled a list of all plausible arguments I’ve found that suggest that saving humanity from extinction is morally wrong. This list may not be exhaustive, so please comment below if I’ve missed any. Hopefully our community can perform research to address these arguments and determine if safeguarding the human race truly is the best thing for us to do.
1. Anti-natalism
This is the view that to be brought into existence is inherently harmful: when parents give birth to a child, they harm that child by subjecting them to the pains of life. One of the most comprehensive defenses of this view is “Better Never to Have Been” by David Benatar. The implication here is that by preventing human extinction, we allow the creation of potentially trillions of people, causing unimaginable harm.
2. Negative Utilitarianism
This is the view that, as utilitarians (or, more broadly, consequentialists), we ought to focus on preventing suffering and pain rather than cultivating joy and pleasure; making someone happy is all well and good, but if you also cause them to suffer, the harm outweighs the good. This view can imply anti-natalism and is often grouped with it. If we prevent human extinction, then we are responsible for all the suffering endured by every future human who ever lives, which is a staggering amount.
3. Argument from S-Risks
S-Risks are a familiar concept in the EA community, defined as scenarios in which suffering is caused on an astronomical scale, potentially outweighing any benefit of existence. According to this argument, the human race threatens to create such scenarios, especially as AI and brain-mapping technology advance, and for the sake of those suffering beings we ought to go extinct now and avoid the risk.
4. Argument from “D-Risks”
I am coining this term, short for “destruction risks”, to express a concept analogous to S-Risks. If an S-Risk is a scenario in which astronomical suffering is caused, then a D-Risk is a scenario in which astronomical destruction is caused. For example, if future humans were to develop a relativistic kill vehicle (a near-light-speed missile), we could use it to destroy entire planets that potentially harbor life (including Earth). According to this argument, we must again go extinct for the sake of these potentially destroyed lifeforms.
The four arguments above, I feel, are the most plausible and most in need of empirical and moral research to either support or refute. The next two, however, are the ones most frequently cited by actual proponents of human extinction.
5. Argument from Deep Ecology
This is similar to the Argument from D-Risks, albeit more down to Earth (pun intended), and is the main stance of groups like the Voluntary Human Extinction Movement. Human civilization has already caused immense harm to the natural environment, and will likely not stop anytime soon. To prevent further damage to the ecosystem, we must allow our problematic species to go extinct.
6. Retributivism
This is simply the argument that humanity has done terrible things, and that we, as a species, deserve to go extinct as punishment. Atrocities that warrant this punishment include the Holocaust, slavery, and the World Wars.
The purpose of this post is not to argue one way or the other, but simply to explore the possibility that we are on the wrong side of this issue. If the more common view is correct and human extinction is a bad thing, then the EA community need not change; if human extinction is, in fact, a good thing, then the EA community must undergo a radical shift in priorities. Given this possibility, we should make some effort to reduce our uncertainties.
Taking that further
It might be that the suffering incurred along the way to a pain-free, joyous existence will outweigh the benefits we gain. Moreover, our struggle for such an existence, and the suffering it entails, might prove to be a waste if nonexistence is actually not that bad.
Moral presumption
An argument from moral presumption can be made against preventing extinction: we already know there is great suffering in the world, but we do not yet know whether we can end that suffering and create a joyous existence. It might therefore be more prudent to go extinct.
Counterargument that is relevant to all three
We already know that there are many species on Earth, and that new ones are evolving all the time. If we let ourselves go extinct, species will continue to evolve in our absence. It is possible that these species, whether non-human or new forms of human, will come to live lives of even more suffering and destruction than we currently experience. By contrast, we already know that we can create net-positive lives for individuals, so we could probably create a species with virtually zero suffering in the future. It therefore falls to us to bring this about.
What's more, the fact that we have the self-awareness to consider the possible utility of our own extinction might indicate that we are the species empowered to ensure that existing human and non-human species, as well as future species, will be ones that don't suffer.
Maybe we could destroy all species and their capacity to evolve, thus avoiding the dilemma in the previous paragraph. But then we would need to be certain that all other species are better off extinct.
A final note: as with many things in philosophy, this discussion can wander into some pretty dark territory, and it's important to take care of our mental health along the way.