This is a Draft Amnesty Week draft. It is not polished, fully thought through, or fully fact-checked.
Commenting and feedback guidelines: This is a rough draft of some thoughts I had that I wouldn’t have posted without the nudge of Draft Amnesty Week. Any feedback and criticisms are welcome.
The Fermi paradox is the discrepancy between the apparently high likelihood of extraterrestrial life and the lack of evidence for its existence. A possible resolution is the Great Filter: the idea that some barrier makes it very unlikely for life to develop to a level advanced enough for us to detect it elsewhere in the universe.
Some believe the Great Filter is behind us: it could be the unlikelihood of life originating in the first place, or the unlikelihood of a species becoming as technologically advanced as we currently are.
Others believe the Great Filter is ahead of us: that a sufficiently advanced civilization has a high chance of wiping itself out. I will argue that this is much more likely than previously thought, because a civilization (or a subset of its population) might bring this about deliberately. Intentionality greatly increases the extinction risk, because it increases both the frequency of risky events and the chance of those events being “successful”.
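To make the frequency-and-success point concrete, here is a minimal sketch with made-up illustrative numbers (the rates and probabilities are placeholders, not estimates): if attempts happen at some rate and each one succeeds with some probability, the chance of at least one success accumulates over time, and intentional actors raise both inputs.

```python
# Minimal sketch, illustrative numbers only: how attempt frequency and per-attempt
# "success" probability compound into cumulative extinction risk over time.

def cumulative_risk(p_success: float, attempts_per_century: float, centuries: float) -> float:
    """Probability of at least one successful attempt over the given horizon,
    assuming independent attempts."""
    return 1 - (1 - p_success) ** (attempts_per_century * centuries)

# Accidental catastrophes: rare, and rarely maximally destructive (placeholder numbers).
print(cumulative_risk(p_success=0.001, attempts_per_century=0.1, centuries=10))  # ~0.001
# Deliberate attempts: more frequent, and optimized to succeed (placeholder numbers).
print(cumulative_risk(p_success=0.01, attempts_per_century=1.0, centuries=10))   # ~0.096
```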
Only two things need to be true for this to be a significant risk:
1. There is at least a subset of the population who believe it would be good to end all life (for example, because they believe that life overall is below hedonic zero).
2. That subset has the power to end all life.
(2) will very likely become true on Earth in the not-too-distant future, so I think it would very likely be true for any civilization more advanced than we currently are. Through strategically placed nuclear strikes, engineered pandemics, or other emerging technologies, we will be able to set life on Earth back so far that it won’t recover in time: it took around 3 billion years for bacteria to evolve into humans, and we only have about 1 billion years left before the sun boils Earth’s oceans.
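As a back-of-envelope check of that timescale claim (using only the rough figures above, which are themselves approximate):

```python
# Rough timescale check, using the approximate figures from the paragraph above.
years_to_reevolve_complex_life = 3e9  # ~3 billion years from bacteria to humans
habitable_years_remaining = 1e9       # ~1 billion years before the sun boils the oceans

# If life were knocked back to a bacteria-like state, there would not be enough time
# for intelligent life to re-evolve before Earth becomes uninhabitable.
print(habitable_years_remaining >= years_to_reevolve_complex_life)  # False
```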
(1) is already true on Earth (see Efilism), and I think the prevalence of such beliefs will only grow. As our societies develop, our moral circles expand. That is a great thing, since it reduces actual suffering, but it also opens our eyes to new forms of suffering. Alien civilizations more advanced than our own would probably also have members who believe it would be good to eliminate life on their planet.
At the moment, I personally fall into the group described by (1), but with too much uncertainty to take any action even if I had the power. Even if we assume humans have net-positive lives, the number of beings in factory farms greatly exceeds the number of humans, and the intensity of their suffering exceeds the intensity of bliss in most, if not all, human experiences. If we consider wild animals, the situation becomes even worse. You might disagree with my decision to include animals, or you might not even be a moral realist, but that does not make (1) untrue.
On alien planets the situation would probably be similar. Life there would probably also be a product of evolution, and evolution is very cruel. I was going to go into detail here about why evolution is cruel, but it ruins my day whenever I research it. This is the main reason I’ve been putting off writing this post, and why I’m posting it as a draft during Draft Amnesty Week.
I am nearly certain that any civilization that is technologically advanced enough to wipe out all life on its planet, and that allows freedom of thought, will have members with enough conviction to take such drastic action based on their ethical calculus. The only way to completely prevent this would be to oppress everyone, either by restricting freedom of thought or by restricting access to basic technology. To me this sounds like a dystopia, and one that ironically lowers wellbeing even further. It would also lock in a civilization and prevent it from progressing to a level at which we could detect it elsewhere in the universe.
A more desirable but less certain solution would be to improve things so much that life is clearly above hedonic zero. However, this would not eliminate the extinction risk completely, because there could be negative utilitarians who believe no amount of suffering is justified. There could also be some who want to end all life for completely unrelated reasons, but I cannot think of any reason more salient than ending suffering.
What are the ethical implications of this for us on Earth now? Basically, suffering reduction is very important if one cares about extinction risk; one can even consider suffering reduction a form of extinction risk reduction. For some, this could mean giving more weight to suffering reduction in their donations, their careers, or how they vote; becoming vegan; advocating for the end of factory farming; and so on.
I also wanted to add a section about AI, arguing that maybe it’s not the ordinary members of a civilization who bring about its extinction, but an AI they develop. I couldn’t find a place in the main article where it flows well, so I’m just adding it here:
A lot of people talk about aligning AI with human values, but if I look at human values in general, I wouldn’t be satisfied with that. And it’s not just a matter of typical human values not being aligned with my personal values: even my own values would be unsatisfactory. They are constantly changing (improving, I hope) as I learn new things, consider things from different perspectives, become aware of logical inconsistencies, and so on.
So maybe an AI is somehow developed to have better values than its creators and concludes that it would be good to end all life, even if its creators don’t believe that. Therefore, as part of reducing the extinction risk from AI, it might be important to improve the state of the world the AI will be born into.