Question Mark

Suffering should not exist.


The case to abolish the biology of suffering as a longtermist action

Brian Tomasik's essay "Why I Don't Focus on the Hedonistic Imperative" is worth reading. Since biological life will almost certainly be phased out in the long run and replaced with machine intelligence, AI safety probably has far more longtermist impact than biotech-related suffering reduction. Still, it could be argued that a better understanding of valence and consciousness could make future AIs safer.

Arguments for Why Preventing Human Extinction is Wrong

An argument against advocating human extinction is that cosmic rescue missions might eventually be possible. If the future of posthuman civilization converges toward utilitarianism, and posthumanity becomes capable of expanding throughout and beyond the entire universe, it might be possible to intervene in far-flung regions of the multiverse and put an end to suffering there.

Arguments for Why Preventing Human Extinction is Wrong

5. Argument from Deep Ecology

    This is similar to the Argument from D-Risks, albeit more down to Earth (pun intended), and is the main stance of groups like the Voluntary Human Extinction Movement. Human civilization has already caused immense harm to the natural environment, and will likely not stop anytime soon. To prevent further damage to the ecosystem, we must allow our problematic species to go extinct.

This seems inconsistent with anti-natalism and negative utilitarianism. If we ought to focus on preventing suffering, why shouldn't anti-natalism also apply to nature? It could be argued that reducing populations of wild animals is a good thing, since it would reduce the amount of suffering in nature, following the same line of reasoning as anti-natalism applied to humans.

If EA is no longer funding constrained, why should *I* give?

Even if the Symmetry Theory of Valence turns out to be completely wrong, that doesn't mean QRI will fail to gain any useful insight into the inner mechanics of consciousness. Andrew Zuckerman previously sent me this comment on QRI's pathway to impact, in response to Nuño Sempere's criticisms of QRI. The expected value of QRI's research may therefore have very high variance: it's possible that their research will amount to almost nothing, but it's also possible that it will turn out to have a large impact. As far as I know, no other EA-aligned organization is doing the sort of consciousness research that QRI is doing.

Is Our Universe A Newcomb’s Paradox Simulation?

The way I presented the problem also fails to account for the possibility that there is a strong apocalyptic Fermi filter that will destroy humanity, as this could explain why we seem to be so early in cosmic history (i.e., cosmic history is unavoidably about to end). This should skew us more toward hedonism.

Anatoly Karlin's Katechon Hypothesis is one Fermi Paradox hypothesis that is similar to what you are describing. The basic idea is that if we live in a simulation, the simulation may have computational limits. Once advanced civilizations use too much computational power or outlive their usefulness, they are deleted from the simulation.

Is Our Universe A Newcomb’s Paradox Simulation?

If we choose longtermism, then we are almost definitely in a simulation, because that means other people like us would have also chosen longtermism, and then would create countless simulations of beings in special situations like ourselves. This seems exceedingly more likely than that we just happened to be at the crux of the entire universe by sheer dumb luck.

Andrés Gómez Emilsson discusses this sort of thing in this video. The fact that we may be uniquely positioned in history to influence the far future may be strong evidence that we live in a simulation.
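
As a rough sketch of the anthropic reasoning above (the simulation count N is an illustrative placeholder, not a figure from any of these sources): if future civilizations would run N simulated copies of observers in apparently pivotal positions for every one observer who genuinely occupies such a position, a naive self-sampling estimate gives

```latex
% Naive self-sampling: you are equally likely to be any one of the
% N simulated observers or the single non-simulated observer who
% appears to occupy a pivotal position in history.
P(\text{simulated} \mid \text{apparently pivotal}) = \frac{N}{N + 1}
\xrightarrow[\;N \to \infty\;]{} 1
```

On this toy model, the larger N is, the more the apparent importance of our position counts as evidence for simulation rather than sheer luck.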

Robin Hanson wrote about the ethical and strategic implications of living in a simulation in his article "How to Live in a Simulation".  According to Hanson, living in a simulation may imply that you should care less about others, live more for today, make your world look more likely to become rich, expect to and try more to participate in pivotal events, be more entertaining and praiseworthy, and keep the famous people around you happier and more interested in you.

If some form of utilitarianism turns out to be the objectively correct system of morality, post-singularity civilizations converge toward utilitarianism, and paradise engineering is tractable, this may be evidence against the simulation hypothesis. Magnus Vinding argues that simulated realities would likely be utopias, and since our reality is not a utopia, the simulation hypothesis is almost certainly false. Thus, if we do live in a simulation, this may imply either that post-singularity civilizations tend not to be utilitarian or that paradise engineering is extremely difficult.

Alexey Turchin created this map of the different types of simulations we may be living in, assuming we do live in one. Scientific experiments, AI confinement, and the education of high-level beings are some of the possible reasons why the simulation might exist in the first place.

If EA is no longer funding constrained, why should *I* give?

Even though some EA-aligned organizations have plenty of funding, not all EA organizations are that well funded. You should consider donating to the causes within EA that are the most neglected, such as cause prioritization research. The Center for Reducing Suffering, for example, had received only £82,864.99 in total funding as of late 2021. The Qualia Research Institute is another EA-aligned organization that is funding-constrained and believes it could put significantly more funding to good use.

AI Alignment YouTube Playlists

This isn't specifically AI alignment-related, but I found this playlist on defending utilitarian ethics. It discusses things like utility monsters and the torture vs. dust specks thought experiment, and is still somewhat relevant to effective altruism.

Why do you care?

My concern for reducing S-risks is based largely on self-interest. There was this LessWrong post on the implications of worse-than-death scenarios. As long as there is a greater-than-zero chance that eternal oblivion is false and that something resembling eternal hell could be experienced, it seems rational to try to avert this risk, simply because of its extreme disutility. If Open Individualism turns out to be the correct theory of personal identity, there is a convergence between self-interest and altruism, because I am everyone.
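
To make the disutility argument concrete, here is a minimal expected-value sketch; the probability p and the disutility U are illustrative placeholders, not figures from the post:

```latex
% Let p > 0 be the (possibly tiny) credence that eternal oblivion is
% false and that a worse-than-death outcome resembling eternal hell
% could be experienced, and let U be its (astronomically large)
% disutility. The expected loss from ignoring the risk is
\mathbb{E}[\text{loss}] = p \cdot U
% which grows without bound as U grows, for any fixed p > 0.
```

Under this framing, even a very small p can dominate the calculation, which is also what makes Pascal's Mugging-style worries hard to avoid here.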

The dilemma is that it does not seem possible to continue living as normal when considering the prevention of worse-than-death scenarios. If it is agreed that anything should be done to prevent them, then Pascal's Mugging seems inevitable. Suicide speaks for itself, and even the other two options, if taken seriously, would change your life. What I mean by this is that it would seem rational to completely devote your life to these causes. It would be rational to do anything to obtain money to donate to AI safety, for example, and you would be obliged to sleep for exactly nine hours a day to improve your mental condition, increasing the probability that you will find a way to prevent the scenarios. I would be interested in hearing your thoughts on this dilemma and whether you think there are better ways of reducing the probability.
