How can we reduce s-risks?

by Tobias Baumann. First published in 2021.

Avoiding futures with astronomical amounts of suffering (s-risks) is a plausible priority from the perspective of many value systems, particularly for suffering-focused views. But given the highly abstract and often speculative nature of such future scenarios, what can we actually do now to reduce s-risks? 

In this post, I’ll give an overview of the priority areas that have been identified in suffering-focused cause prioritisation research to date. Of course, this is subject to great uncertainty, and it could be that the most effective ways to reduce s-risks are quite different from the interventions outlined below.

A comprehensive evaluation of each of the main priority areas is beyond the scope of this post, but in general, I have included interventions that seem sufficiently promising in terms of importance, tractability, and neglectedness. I have excluded candidates that seem intractable, or that are likely to backfire by causing great controversy or backlash (e.g. trying to stop technological progress altogether). When reducing s-risks, we should seek to find common ground with other value systems; accordingly, many of the following interventions are worthwhile from a broad range of perspectives.

Improving our values

Many factors will influence future outcomes — for instance, technological progress, economic dynamics, cooperation problems, political and cultural trends. However, the values of future decision-makers are arguably the most fundamental determinant of how the future will go, which suggests that improving those values is a good lever. (Whether it is the best lever depends on the tractability of social change, the likelihood of a lock-in of values rather than continued value drift, and the time-sensitivity1 of moral advocacy. For more details, see Arguments for and against moral advocacy.)

Moral circle expansion

In particular, ensuring sufficient moral consideration of all sentient beings — expanding the moral circle — is likely a key factor for how well the long-term future will go. By showing and spreading concern for suffering irrespective of time, species, or substrate, we reduce the chance of a massive amount of “voiceless” suffering caused by moral disregard of certain types of beings, like animals or artificial minds. (See also How s-risks could come about.)

However, it is worth noting that a larger moral circle could backfire. Given that, it is crucial to expand the moral circle in particularly thoughtful, sustainable, and prudent ways. In particular, we should take care to avoid overly controversial courses of action that might trigger a serious (possibly permanent) backlash, since severe conflict involving altruistic movements is a possible source of s-risks, and may also hamper future advocacy efforts. 

Efforts to expand the moral circle can take many forms, ranging from advocacy for animal welfare reforms to advancing clean meat. Discussing this in detail is beyond the scope of this post, but in general, it is worth noting that we will likely arrive at different priorities when focusing on s-risk reduction, compared to an emphasis on helping animals in the here and now — it would be a remarkable coincidence if short-term-focused work were also ideal from this different perspective.2 (See Longtermism and animal advocacy for more details on this.)

Suffering-focused ethics

In addition to advocating moral consideration of all sentient beings (and to counter the possible downsides of moral circle expansion), we could explore and develop suffering-focused views, as these views are particularly likely to prioritise s-risk reduction. In his essay on Reasons to promote suffering-focused ethics, Brian Tomasik argues that this might be a particularly robust and effective strategy, while also exploring possible downsides (such as the risk of zero-sum dynamics).

More generally, Magnus Vinding argues that moral reflection and greater clarity on our fundamental values are of critical importance to any altruistic endeavour.

For more details on open research questions on suffering-focused ethics, see here.

Politics and governance

Given our vast uncertainty and “nearsightedness” about the future, we should arguably focus on putting society in as good a position as possible to address the challenges of the future — including the challenge of averting potential s-risks. 

One important dimension of this is a functional political system. Efforts to avoid harmful political and social dynamics, and to strengthen democratic governance, therefore plausibly contribute to reducing s-risks. 

Improving our political system

How to achieve “better politics” is, of course, a difficult question. (See here for an overview.) The following aspects seem particularly important from an s-risk perspective:

  • Excessive political polarisation, especially party polarisation in the US, makes it harder to reach consensus or a fair compromise, and undermines trust in public institutions. Tribalism also tends to exacerbate conflicts and therefore constitutes a risk factor for s-risks. Suggested interventions to reduce polarisation include voting reform, public service broadcasting, deliberative citizens’ assemblies, and compulsory voting.
  • Conversely, more charitable and thoughtful political discourse arguably makes severe conflicts (and resulting worst-case outcomes) less likely. Therefore, improving the norms and standards of political debate could be a promising way to reduce s-risk.
  • One plausible s-risk scenario is that the “wrong” individuals and ideologies may become dominant, potentially resulting in a permanent lock-in of a totalitarian power structure. Historical examples of such totalitarian regimes were temporary and localised, but a stable global dictatorship may become possible in the future.

Reducing risks from malevolent actors

Problems such as polarisation or totalitarianism are particularly worrisome in combination with malevolent personality traits in political leaders. Therefore, another way to improve governance is to reduce the influence of such “malevolent” actors, where “malevolence” can be operationalised as the Dark Tetrad traits (psychopathy, Machiavellianism, narcissism, and sadism). In Reducing long-term risks from malevolent actors, David Althaus and I argue that malevolent leaders could negatively affect humanity’s long-term trajectory (and could constitute both an s-risk and an x-risk), and consider interventions to reduce the expected influence of malevolent individuals on the long-term future.

While such interventions are mostly aimed at (possibly enhanced) humans, similar considerations could also apply to other future agents such as AI, if they can meaningfully be said to exhibit comparable tendencies. Generally speaking, we may have much more leeway to shape future agents if iterated embryo selection or similar technologies become feasible, or if entirely new classes of minds are created (like ems or AI). Therefore, it’s valuable to work on how these technologies can be used to avert worst-case dynamics and escalating conflicts. 

(This can be seen as a special case of moral enhancement; however, compared to other forms of moral enhancement, reducing malevolent traits is more targeted at s-risk reduction, more robust, and more likely to gain widespread acceptance.)

Maintaining the rule of law

Contemporary societies deal with agential risks by simply enacting and enforcing relevant laws — e.g. a ban on extortion. This appears to be fairly successful, which suggests that we should ensure that future malicious actors will be similarly disincentivised from causing harm.3

It is worth noting that s-risks may arise from future conflicts that take place in radically different circumstances, e.g. involving advanced artificial intelligence or space colonisation. And it seems plausible — though not obvious — that we should worry most about tail scenarios with extraordinarily large amounts of suffering. In those extreme scenarios, a lot of things that we normally take for granted, such as the rule of law, might break down.

One way to prevent such worst-case scenarios is legal research on how relevant existing laws, e.g. regarding threats and extortion, can be made more robust. Specifically, it would be valuable to look into how we can increase the probability that such laws will still apply (and be enforced) in radically different contexts, and how we can also apply comparable regulation at the level of nations, multinational companies, or even entirely new types of future actors.4

Space governance is another potential focus area. We currently lack a coherent global framework for space governance — as of now, space is a free-for-all. This poses a risk of severe conflicts, so it would be valuable to replace the current state of ambiguity with a coherent framework of (long-term) space governance that ensures good outcomes if and when large-scale space colonisation becomes feasible.

Last, we could try to distribute future technological capabilities in a way that makes it difficult to create astronomical amounts of suffering – so that (ideally) no single actor is able to cause an s-risk. That would address both incidental and agential s-risks. However, it is hard to see concrete ways to achieve a safer distribution of future capabilities.

Research on bargaining and escalating conflicts

This class of interventions is somewhat more speculative and mostly targeted at agential s-risks. Agents that threaten to harm other agents, either in an attempt at extortion or as part of an escalating conflict, are one of the most plausible mechanisms for how worst-case outcomes could arise. 

In light of that, we should consider research on how to best prevent such negative-sum dynamics (and achieve more cooperative outcomes) as another potential priority area. This research could focus on theoretical foundations of game theory and decision theory or on finding the best ways to change the circumstances in which future agents will exist so that escalating conflicts can be avoided. (See Research priorities for preventing threats for more details.)
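
To make the notion of a negative-sum dynamic more concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration (the outcome labels and payoff numbers are not from this post, and it ignores the credibility and commitment issues that the research described here is concerned with); it simply shows that when a threat is made, refused, and then carried out, both parties end up worse off than if no threat had been made at all.

```python
# Toy payoff table for a one-shot threat/extortion interaction.
# All outcome labels and numbers are invented purely for illustration.

# Each outcome maps to (threatener payoff, target payoff).
payoffs = {
    "no threat made":                        (0, 0),
    "threat made, target gives in":          (5, -5),   # a transfer from the target to the threatener
    "threat made, refused, not carried out": (-1, 0),
    "threat made, refused, carried out":     (-2, -50),
}

baseline = payoffs["no threat made"]
worst = payoffs["threat made, refused, carried out"]

# The carried-out threat is worse for *both* parties than the no-threat baseline,
# which is what makes escalating threat dynamics negative-sum.
print("Total welfare without any threat:      ", sum(baseline))
print("Total welfare if threat is carried out:", sum(worst))
print("Negative-sum outcome?", sum(worst) < sum(baseline))
```

The research question is then how agents can be designed, or their circumstances shaped, so that this last outcome is never reached in the first place.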

Worst-case AI safety

Transformative artificial intelligence (TAI) may or may not be an exceptionally important lever to shape the long-term future. To the degree that it is, TAI could also result in astronomical suffering, which suggests that AI safety research with a focus on preventing s-risks could be a priority. (This has been termed suffering-focused AI safety or worst-case AI safety.)

Technical and governance interventions to avoid escalating conflicts involving transformative AI systems, and achieve cooperative AI, seem particularly promising for s-risk reduction.5 Jesse Clifton’s Research Agenda on Cooperation, Conflict, and Transformative AI describes potential research avenues in more detail. (See also here for a concrete example of work in this area.)

Surrogate goals

A particularly promising idea is to increase the likelihood that surrogate goals will be implemented successfully in future agents — especially in advanced AI systems — which would deflect (the disvalue resulting from) threats. The idea is to add to one’s current goals a surrogate goal that one did not initially care about, in the hope that threats will target this surrogate goal rather than what one initially cared about. 
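
As a rough illustration of this mechanism, here is a minimal toy sketch in Python. All names and numbers are invented, and it abstracts away the game-theoretic subtleties (credibility, commitment, equilibrium selection) that actual research on surrogate goals addresses. The threatener targets whatever gives it the most leverage, as measured by the target's stated disvalue; because the surrogate goal is treated as exactly as bad in the stated utility function, the threatener's leverage is unchanged, but a carried-out threat against the surrogate produces no real suffering.

```python
# Toy sketch of the surrogate goals idea; invented for illustration only,
# not a model of any actual AI system or bargaining setup.

def choose_threat(options):
    """The threatener picks the threat with the greatest leverage (the target's
    stated disvalue), breaking ties in favour of the threat that is cheaper
    for the threatener to carry out."""
    return max(options, key=lambda o: (o["stated_disvalue"], -o["threat_cost"]))

# Without a surrogate goal, the only available threat targets real suffering.
without_surrogate = [
    {"name": "threaten to cause suffering",
     "stated_disvalue": 100, "threat_cost": 5, "true_disvalue": 100},
]

# With a surrogate goal, harm to the surrogate is treated as exactly as bad in
# the target's stated utility function (so the threatener's leverage and
# incentives are unchanged), but carrying it out causes no real suffering.
with_surrogate = without_surrogate + [
    {"name": "threaten the surrogate goal",
     "stated_disvalue": 100, "threat_cost": 1, "true_disvalue": 0},
]

for label, options in [("without surrogate goal", without_surrogate),
                       ("with surrogate goal", with_surrogate)]:
    chosen = choose_threat(options)
    print(f"{label}: threatener targets '{chosen['name']}', "
          f"true disvalue if carried out = {chosen['true_disvalue']}")
```

Making something like this work for real agents, including ensuring that threats actually shift to the surrogate and that incentives are not distorted, is precisely what the research directions linked below are about.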

See here for possible research directions on surrogate goals.

Capacity building

It is worth reiterating that we face great uncertainty about what the future will look like, about how we can best influence it, and about how we can best reduce s-risks. The above priority areas are some of our current best guesses, but we also believe that there is a strong case for capacity building as a main priority — that is, perhaps the most important thing we can do is to ensure that compassionate agents of the future will be in a better position to reduce suffering.

This entails building a community interested in and knowledgeable about suffering reduction. Given the currently insufficient degree of moral concern for suffering (of all sentient beings), we need to connect with a broader set of people, and to present to them what we consider compelling reasons to take such concerns seriously.

Moral advocacy aims to contribute to this, yet thinking purely in terms of convincing the largest number of people would be misguided. Capacity building is also about ensuring the long-term stability of the movement, networking with relevant stakeholders, developing and refining our ideas, and establishing healthy social norms. And of course, expanding our knowledge of how to best reduce suffering (“wisdom-building”) is another critical aspect of capacity building. (For more details on this, see our Strategic Plan and our Open Research Questions.)

  1. Time-sensitivity is about whether we can delegate or “pass the buck” to our successors. That is, it’s not clear if it’s especially urgent to expand the moral circle now, as opposed to gathering more information and retaining the option to do so later. This depends on whether we expect a value lock-in or other pivotal events soon. (Similar questions have also been discussed for other interventions.)
  2. However, it is less surprising if there is some degree of convergence on a broad category of actions for the near- and long-term future. For instance, increasing consideration of neglected beings is a solid heuristic for improving the world, regardless of the timeframe. Also, if the current knowledge and resources of the movement are not yet strongly optimised for maximal short-term impact, there is more room for improvements to both short- and long-term impact (e.g., increasing effectiveness in general).
  3. This also requires that the relevant laws pertain to all forms of suffering.
  4. If laws are not directly applicable — e.g. in conflicts between nations, or some future AI scenarios — we could try to improve norms and conventions that are relevant to agential risks. For instance, UN conventions could outlaw the use of AI for threats, similar to existing discussion on the use of AI for torture.
  5. Note that the governance interventions that are most promising need not have much to do with AI directly. For instance, they could have to do with better international relations in general.