
Should we worry that the risk of omnicide is increased by the growth of movements like EA and longtermism that draw attention to the extent and prevalence of suffering and the desirability of its reduction? 

One way of ending suffering would be to eliminate all life. If we convince more and more people of the problem of suffering and the necessity of doing something about it, do we also inadvertently increase the likelihood that some will conclude that to end suffering we must end the world? With technological advances, it might take only a very small number of actors convinced of this idea for it to become a real risk over time.


This is a good question, but I worry you can make this argument about many ideas, and the cost of self-censorship is really not worth it. For example:

  • If we talk too much about how much animals are suffering, someone might conclude humans are evil
  • If we talk too much about superintelligence, someone might conclude AI is superior and deserves to outlive us
  • If we talk too much about the importance of the far future, a maximally evil supervillain could actually become more motivated to increase x-risk

As a semi-outsider working on the fringes of this community, my impression is that EA is far too concerned with what is good or bad to talk about. Some ideas, posts, and words have negative EV in the short run, but I feel that is all outweighed by the value of vigorous debate and the capacity for free thinking.

On a more serious note, I am philosophically concerned about the argument "the possibility of s-risks implies we should actually increase x-risk", and I am actively working on this. Happy to talk more if it's of mutual interest.

Thank you for your reply. I would not wish to advocate for self-censorship, but I would be interested in creating and spreading arguments against the efficacy of doomsday projects, which may help to avert them.

The solution to the problem of suffering cannot be to eliminate all life, because lifeless evolution created life once and could recreate it. Millions of years of pain would then come again before another intelligent species like ours re-appeared with the technical power, and the chance, to resolve the problem of suffering by consciously and rationally controlling that phenomenon until the end of this universe.

Millions of years of "state of nature" pain are strongly preferable to s-risks.

A doomsday end-suffering project would then have to eliminate both life and the conditions for the evolution of life throughout the universe.

RobertDaoust
Your concern about doomsday projects is very welcome in this age of high existential risk. Suffering plays a central role in that game: religious fanatics, for instance, await the cessation of suffering through some kind of apocalypse, while many negative utilitarians and antinatalists, on another side, would like us to organize the end of the world in the coming years, a prospect that can only lead to absurd results.

In the short term, a doomsday end-suffering project could plan to eliminate life (or at least human life, since bacteria and other small creatures would be extremely hard to eliminate from this planet). But I doubt such a project would take into consideration "the conditions for the evolution of life throughout the universe", if only because its authors are completely unable to do anything about those conditions, or because they are in any case not rational in their endeavor.

So there is a race between us and the doomsday mongers: I think that bringing a solution to suffering is our only chance to win in time.