A few weeks ago I was protesting outside a pig slaughterhouse. I thought about GPT-3 a bit. I am afraid of a bad singularity, but the bad singularity has already come for the pigs with the explosion of human intelligence, and as a result, everyone they love is definitely going to die horribly. The idea that technology might lead to hell on earth is not a theoretical problem. It's happened before. I wonder how well a future singularity is going to go for people like me.

Many people are rightly concerned about technical AI alignment. Successful technical alignment would mean that each AI system is permanently aligned with the interests of some subset of humans. Is this sufficient to be confident of a good future? There are still serious risks even if the dominant coalition of AIs is aligned with something like the collective opinion of humanity (this includes the case of an aligned singleton).

Here are some reasons why the future might be full of astronomical suffering:

  * Economic productivity: Suffering might be instrumental in achieving high economic output. Animal suffering in factory farms is a case in point: it just so happens that the most economically efficient way to satisfy the demand for cheap meat involves a lot of suffering. This is not currently an s-risk because it’s not astronomical in scope, but it’s possible that future technology will enable similar structures on a much larger scale. For instance, the fact that evolution uses pain suggests that learning might be more efficient if negative reward signals are also used, and we might consider sufficiently advanced and complex reinforcement learners to be capable of suffering.
  * Information gain: Experiments on humans or other sentient creatures might be useful for scientific purposes (like animal testing), while causing harm to those experimented on. Again, future technology may enable such practices on a much larger scale as it may become possible to run a large number of simulations of artificial minds (or ems) capable of suffering.
  * Entertainment: Many humans enjoy forms of violent entertainment. There are countless historical examples (gladiator fights, public executions and torture, hunting, and much more). While nowadays such entertainment is often fictional (e.g. in video games or movies), some real-world instances still exist (e.g. torture and execution videos, illegal animal fights, hunting). It is conceivable that complex simulations will be used for entertainment purposes in the future, which could cause serious suffering if these simulations contain artificially sentient beings.

Every detail you add to your prediction makes it less likely, but it seems like there are many ways things could go wrong. If we think rapid technological change is plausible, then we should be paranoid about causing or tolerating tons of suffering. Humanity's current behavior suggests we are very far from being sufficiently paranoid about suffering.

As far as I can reason, the extent of animal suffering is the most important injustice of our time, but this isn't obvious to everyone. If transformative superintelligent AI is in our near-to-medium future, it seems rather urgent to shift the distribution of opinion on the importance of suffering. Ideally, one would influence the opinions of the people most likely to control transformative AI. There are many plausible options, including direct activism, gaining political influence, and fundamental research. It is important to keep replaceability in mind when deciding how one can best contribute. Many of these actions remain valuable even if transformative AI is far away.

Our treatment of animals, and to a lesser degree of other humans, is extremely troubling evidence. It suggests that low-power agents will be mistreated whenever there are economic or ideological reasons supporting their mistreatment. The near-to-medium future might have dramatically more inequality and more potential for mistreatment.

It is unclear how much time is left until transformative AI takes off. But it is worth thinking about how much values can be shifted and which institutions are value-aligned with a future free of astronomical suffering. I also think it is better to start taking some sort of action now instead of just planning; you can always change your plans later as long as you avoid doing anything counter-productive.

Notes:

  1. In general, I think issues of severe suffering are more important than issues of distributing gains. The gains from AI might be very concentrated: horses and chimps did not gain much from the rise of humanity. There is a plausibly high-value political project of ensuring the gains from AI are somewhat evenly distributed. Some organizations, such as OpenAI, already support this goal.
  2. Plausibly one can focus on getting AI to learn human values and extrapolate them to something like our 'Coherent Extrapolated Volition' (see the related MIRI paper). In addition to the CEV being constructible, we seem to need at least one of two further assumptions: either the CEV of most subsets of humanity matches the CEV of humanity as a whole, or the people controlling AI will altruistically choose to encode humanity's CEV instead of their own.
  3. There are plausible functional decision-theoretic reasons to selfishly want to be in the coalition that 'robustly cares about the suffering of less powerful agents'.
  4. Some people rate especially reviled prisoners as less deserving of concern than plants. The distressing evidence is not limited to our treatment of animals. A 2016 study that asked US participants to rate 30 different entities by the “moral standing” that they deserved found that “villains” (murderers, terrorists, and child molesters) were deemed by participants to deserve less moral standing than “low-sentience animals” (chickens, fish, and bees), with mean scores of 1.77 and 2.64 respectively on a scale from 0 to 9. Indeed, they were deemed to deserve less moral standing than non-sentient “environmental targets” (3.53) or plants (2.52).
