I am disturbed by the absolutely horrific things that some humans go through. The very worst things I can think of include child sex trafficking and the fact that young children are sometimes raped and abused by family members, including their parents. I have read stories about the torture of children by psychopaths. The suffering these children endure must be unimaginable to those who have not experienced it.

I considered sharing specific details of the most disturbing acts I have read about but decided that would be inappropriate, even though such details might strengthen my argument. If anyone is interested, read the Wikipedia page on the serial killer Albert Fish (not for the faint of heart).

My point is that preventing human extinction inevitably subjects many, many more children to these atrocities. This doesn’t sit at all well with me and I don’t think it should sit well with any reasonable person.

I suspect the main comeback to this is that as humanity improves we will eventually see a day when these atrocities no longer occur. I think this is far too optimistic. Even if it is achievable, it could take millennia before we completely eradicate all abuse. I doubt that millions more abused children are a price worth paying.

I’m not saying we should encourage extinction, I’m saying we should cease efforts to prevent it. We should redirect these resources to making the world a better place, not prolonging its existence.

Comments

On the other hand, there are also arguments for why one should work to prevent extinction even if one did have the kind of suffering-focused view that you're arguing for; see e.g. this article. To briefly summarize some of its points:

If humanity doesn't go extinct, then it will eventually colonize space; if we don't colonize space, it may eventually be colonized by an alien species with even more cruelty than us.

Whether alternative civilizations would be more or less compassionate or cooperative than humans, we can only guess. We may however assume that our reflected preferences depend on some aspects of being human, such as human culture or the biological structure of the human brain[48]. Thus, our reflected preferences likely overlap more with a (post-)human civilization than alternative civilizations. As future agents will have powerful tools to shape the world according to their preferences, we should prefer (post-)human space colonization over space colonization by an alternative civilization.

A specific extinction risk is the creation of unaligned AI, which might first destroy humanity and then go on to colonize space; if it lacked empathy, it might create a civilization where none of the agents cared about the suffering of others, causing vastly more suffering to exist.

Space colonization by an AI might include (among other things of value/disvalue to us) the creation of many digital minds for instrumental purposes. If the AI is only driven by values orthogonal to ours, it would likely not care about the welfare of those digital minds. Whether we should expect space colonization by a human-made, misaligned AI to be morally worse than space colonization by future agents with (post-)human values has been discussed extensively elsewhere. Briefly, nearly all moral views would most likely rather have human value-inspired space colonization than space colonization by AI with arbitrary values, giving extra reason to work on AI alignment especially for future pessimists.

Trying to prevent extinction also helps avoid global catastrophic risks (GCRs); GCRs could set social progress back, causing much more violence and other kinds of suffering than we have today.

Global catastrophe here refers to a scenario of hundreds of millions of human deaths and resulting societal collapse. Many potential causes of human extinction, like a large scale epidemic, nuclear war, or runaway climate change, are far more likely to lead to a global catastrophe than to complete extinction. Thus, many efforts to reduce the risk of human extinction also reduce global catastrophic risk. In the following, we argue that this effect adds substantially to the EV of efforts to reduce extinction risk, even from the very-long term perspective of this article. This doesn’t hold for efforts to reduce risks that, like risks from misaligned AGI, are more likely to lead to complete extinction than to a global catastrophe. [...]
Can we expect the “new” value system emerging after a global catastrophe to be robustly worse than our current value system? While this issue is debated[60], Nick Beckstead gives a strand of arguments suggesting the “new” values would in expectation be worse. Compared to the rest of human history, we currently seem to be on an unusually promising trajectory of social progress. What exactly would happen if this period were interrupted by a global catastrophe is a difficult question, and any answer will involve many judgement calls about the contingency and convergence of human values. However, as we hardly understand the driving factors behind the current period of social progress, we cannot be confident it would recommence if interrupted by a global catastrophe. Thus, if one sees the current trajectory as broadly positive, one should expect this value to be partially lost if a global catastrophe occurs.

Efforts to reduce extinction risk often promote coordination, peace and stability, which can be useful for reducing the kinds of atrocities that you're talking about.

Taken together, efforts to reduce extinction risk also promote a more coordinated, peaceful and stable global society. Future agents in such a society will probably make wiser and more careful decisions, reducing the risk of unexpected negative trajectory changes in general. Safe development of AI will specifically depend on these factors. Therefore, efforts to reduce extinction risk may also steer the world away from some of the worst non-extinction outcomes, which likely involve war, violence and arms races.

My rough answer to this is: If someone wants to die (after thinking about it for a long time and having time to reflect on it), let them die. If they want to live, help them do that. The vast majority of people want to continue living. I don't see how the atrocities that are experienced by humans outweigh the benefits, given that the vast majority of humans seem to have a pretty decent will to live.

(This does not hold for animals, and I think the strongest arguments for antinatalism and promoting extinction come from considering non-human suffering, but that seems different from the case you are making)

My rough answer to this is: If someone wants to die (after thinking about it for a long time and having time to reflect on it), let them die.

Some people don't have the choice to die because they're prevented from it, such as victims of abuse or torture, or of certain freak accidents.

I don't see how the atrocities that are experienced by humans outweigh the benefits, given that the vast majority of humans seem to have a pretty decent will to live.

I think this is a problem with the idea of "outweigh". Utilitarian interpersonal tradeoffs can be extremely cruel and unfair. If you think happiness can aggregate to outweigh the worst instances of suffering, consider:

1. How many additional happy people would need to be born to justify subjecting a child to a lifetime of abuse and torture?

2. How many extra years of happy life for yourself would you need to justify subjecting a child to a lifetime of abuse and torture?

The framings might evoke very different immediate reactions (2 seems much more accusatory because the person benefiting from another's abuse and torture is the one making the decision to subject them to it), but for someone just aggregating by summation, like a classical utilitarian, they're basically the same.

I think it's put pretty well here, too:

There’s ongoing sickening cruelty: violent child pornography, chickens are boiled alive, and so on. We should help these victims and prevent such suffering, rather than focus on ensuring that many individuals come into existence in the future. When spending resources on increasing the number of beings instead of preventing extreme suffering, one is essentially saying to the victims: “I could have helped you, but I didn’t, because I think it’s more important that individuals are brought into existence. Sorry.”

Counterpoint (for purposes of getting it into the discussion; I'm undecided about antinatalism myself): that argument only applies to people who are already alive, and thus not to most of the people who would be affected by the decision whether to extend the human species or not (i.e. those who don't yet exist). David Benatar argues (podcast, book) that while, as you point out, many human lives may well be worth continuing, those very same lives (he thinks all lives, but that's more than I need to make this argument) may nevertheless not have been worth starting. If this is the case, then some or all of the lives that would come into existence by preventing extinction may also not be worth starting.

Do you have a short summary of why he thinks that someone answering the question "would you have preferred to die right after childbirth?" with "no" is not strong evidence that they should have been born? It seems like the same thing to me. I surely prefer to exist and would be pretty sad about a world in which I wasn't born (in that I would be willing to endure significant additional suffering in order to cause a world in which I was born).

Do you have a short summary of why he thinks that someone answering the question "would you have preferred to die right after childbirth?" with "no" is not strong evidence that they should have been born?

I don't know what Benatar's response to this is, but consider this comment by Eliezer in a discussion of the Repugnant Conclusion:

“Barely worth living” can mean that, if you’re already alive and don’t want to die, your life is almost but not quite horrible enough that you would rather commit suicide than endure. But if you’re told that somebody like this exists, it is sad news that you want to hear as little as possible. You may not want to kill them, but you also wouldn’t have that child if you were told that was what your child’s life would be like.

As a more extreme version, suppose that we could create arbitrary minds, and chose to create one which, for its entire existence, experienced immense suffering which it wanted to stop. Say that it experienced the equivalent of being burned with a hot iron, for every second of its existence, and never got used to it. Yet, when asked whether it wanted to die, or would have preferred to die right after it was born, we'd design it in such a way that it would consider death even worse and respond "no". Yet it seems obvious to me that it outputting this response is not a compelling reason to create such a mind.

If people already exist, then there are lots of strong reasons (respecting people's autonomy, etc.) for why we should respect their desire to continue existing. But if we're making the decision about what kinds of minds should come into existence, those reasons don't seem particularly compelling. Especially not since we can construct situations in which we could create a mind that preferred to exist, but where it nonetheless seems immoral to create it.

You can of course reasonably argue that whether a mind should exist depends on whether they would want to exist and on some additional criteria, e.g. how happy they would be. In that case, if we really could create arbitrary minds, we might as well (and should) create ones that were happy and preferred to exist, as opposed to ones which were unhappy and preferred to exist. But then we've already abandoned the simplicity of just basing our judgment on asking whether they're happy with having survived to their current age.

I surely prefer to exist and would be pretty sad about a world in which I wasn't born (in that I would be willing to endure significant additional suffering in order to cause a world in which I was born).

This doesn't seem coherent to me; once you exist, you can certainly prefer to continue existing, but I don't think it makes sense to say "if I didn't exist, I would prefer to exist". If we've assumed that you don't exist, then how can you have preferences about existing?

If I ask myself the question, "do I prefer a world where I hadn't been born versus a world where I had been born", and imagine that my existence would actually hinge on my answer, then that means that I will in effect die if I answer "I prefer not having been born". So then the question that I'm actually answering is "would I prefer to instantly commit a painless suicide which also reverses the effects of me having come into existence". So that's smuggling in a fair amount of "do I prefer to continue existing, given that I already exist". And that seems to me unavoidable - the only way we can get a mind to tell us whether or not it prefers to exist, is by instantiating it, and then it will answer from a point of view where it actually exists.

I feel like this makes the answer to the question "if a person doesn't exist, would they prefer to exist" either "undefined" or "no" ("no" as in "they lack an active desire to exist", though of course they also lack an active desire to not-exist). Which is probably for the better, given that there exist all kinds of possible minds that would probably be immoral to instantiate, even though once instantiated they'd prefer to exist.
