
The Center on Long-Term Risk recently posted an updated introduction to s-risks on our website.

Suffering risks, or s-risks, are “risks of events that bring about suffering in cosmically significant amounts” (Althaus and Gloor 2016). This article will discuss why the reduction of s-risks could be a candidate for a top priority among altruistic causes aimed at influencing the long-term future. The number of sentient beings in the future might be astronomical, and certain cultural, evolutionary, and technological forces could cause many of these beings to have lives dominated by severe suffering. S-risks might result from unintended consequences of pursuing large-scale goals (“incidental s-risks”), intentional harm by intelligent beings with influence over many resources (agential), or processes that occur without agents’ intervention (natural) (Baumann 2018a).

Efforts to reduce s-risks generally consist of researching factors that likely exacerbate these three mechanisms (especially emerging technologies, social institutions, and values), applying insights from this research (e.g., recommending principles for the safe design of artificial intelligence), and building the capacity of future people to prevent s-risks.

Summary:

  • Due to coming advances in space settlement and technology (which on Earth has historically enabled massive increases in suffering, despite plausibly increasing the average human quality of life), it is possible that there are risks of suffering on a scale that is significant relative to the long-term future as a whole.
  • Although it is very difficult to predict the effects of interventions on the long-term future, efforts to reduce s-risks might be sufficiently predictable and stable if they take one of two approaches: (1) identifying factors in the near future that could lock in states leading to massive suffering, or (2) putting future generations in a better position to make use of information they will have about impending s-risks.
    • One important risk factor for such lock-in is the deployment of powerful artificially intelligent (AI) agents, which appears technically feasible in the next few decades and could lead to a future shaped by goals that permit causing astronomical suffering.
    • Solving the problem of aligning AI systems with human intent appears to be neither sufficient nor necessary to prevent s-risks from AI.
  • Preventing intense suffering is the top priority of several plausible moral views, and given that it is a sufficiently high priority of a wide variety of other views as well, accounting for moral uncertainty suggests that s-risk reduction is an especially robust altruistic cause.
  • Reducing s-risks by a significant amount might be more tractable than other long-term priorities, though this is unclear. On the one hand, the worst s-risks seem much less likely than, e.g., risks of human extinction; this limits the value of s-risk reduction on views according to which the expected moral value of posthuman civilization is highly positive. On the other hand, marginal efforts at s-risk reduction may be especially valuable because s-risks are currently very neglected, and avoiding worst cases may be easier than fully solving AI alignment or ensuring a utopian future.
  • Focusing on preventing worst-case outcomes of suffering appears more promising than moving typical futures towards those with no suffering at all, because it is plausible that some futures could be far worse than typical.
  • Incidental s-risks could result from the exploitation of future minds for large-scale computations needed for an interstellar civilization, detailed simulations of evolution, or spreading wildlife throughout the universe without considering the suffering of the organisms involved.
  • Agential s-risks could result from malevolent or retributive agents gaining control over powerful technology, or from AIs that deliberately create suffering.
  • Natural s-risks could result from future civilizations not prioritizing reducing unnecessary suffering, for reasons similar to the persistence of wild animal suffering on Earth.
  • Targeted approaches to s-risk reduction might be preferable to broader alternatives insofar as they avoid unintentionally influencing many variables in the future, which could backfire. The most robust of these approaches include: research into AI designs that decrease their tendencies towards destructive conflicts or reduce near-miss risks; some forms of decision theory research; promotion of coordination between, and security within, AI labs; and research modeling s-risk-relevant properties of future civilizations.
  • Broad approaches to s-risk reduction have the advantage of potentially improving a wider range of possible futures than targeted ones. Examples of these include: advocating moral norms against taking risks of large-scale suffering; promoting more stable political institutions that are conducive to compromise; and building knowledge that could be used by future actors who are in positions to prevent s-risks.
Comments

Thanks for posting this. I'm glad to see that there are more introductory resources on s-risks.

Just want to signal boost the subreddit for s-risk discussion.

Thanks for sharing!

Next steps

Those interested in reducing s-risks can contribute through donations to organizations that prioritize s-risks, such as the Center on Long-Term Risk and the Center for Reducing Suffering, or through their careers. To build a career that helps reduce s-risks, one can learn more about the research fields discussed in Section 4.1 and reach out to the Center on Long-Term Risk or the Center for Reducing Suffering for career planning advice.

I believe this section would benefit from being expanded. For example, you could point to reading lists on relevant topics, potential research topics/agendas, or concrete opportunities to get involved.
