
80,000 Hours has lots of concrete guides to the cause areas we could work in, and even an estimate of the importance of each problem (though they note it isn't very accurate), as below. But they estimate from an x-risk angle: the "Scale" number is determined by the lives we save (DALYs), so AI safety gets the highest score because its extinction risk is the highest. For s-risks, however, the main consideration should be the suffering we reduce. There are also many distinct areas within s-risks, so is there any cause prioritization research on s-risks?

Lukas Gloor at the Center on Long-Term Risk (CLR) wrote a forum post on cause prioritization for s-risks which you might find informative.

CLR argues that suffering-focused EAs should prioritize influencing AI. Their priority areas within that field include:

  • Multi-agent systems
  • AI governance
  • Decision theory and formal epistemology
  • Risks from malevolent actors
  • Cause prioritization and macrostrategy related to s-risks

Thanks for your response. I agree that AI-related s-risks are very important, but:

1. Is there a comparison showing they are more important than other s-risk areas (e.g. macrostrategy, politics...)? And within AI, which is more important: AI sentience or AI safety for humans?
2. What subjects should we work on or study for s-risks? I've thought about working on cognitive science, because a lot of moral uncertainty remains unresolved (e.g. the ratio of suffering to happiness, the power of the hedonic treadmill...). Neuroscience is developing quickly, too. If we can use neuroscience to understand the essence of "consciousness", it could be applied to AI/animal sentience and AI moral alignment. But there seems to be less discussion about this?

Ariel Simnegar 🔸
Downside-focused views typically emphasize the moral importance of reducing risks of astronomical suffering over AI safety for humans. It's argued that future AIs are likely to have greater total moral weight than future biological beings, due to their potential to pack more sentience per unit of volume and their relative ease of replication. CLR argues that influencing the trajectory of AI in directions that reduce the risks of astronomically bad outcomes is an especially important priority. I personally worry that we put too much emphasis on aligning AI to "modern human values" and not enough on more future-proof ethics. I think you'd find Alex Turner and Quintin Pope's post on this topic very helpful.

As far as I know, there are no estimates (at least not public ones). But as Stan pointed out, Tobias Baumann has raised some very relevant considerations in different posts/podcasts.

Fwiw, researchers at the Center on Long-Term Risk think AGI conflict is the most concerning s-risk (see Clifton 2019), although it may be hard to grasp the full details of why they think that just from reading their posts, without talking to them.

The s-risk people I'm familiar with are mostly interested in worst-case s-risk scenarios that involve vast populations of sentient beings over vast time periods. It's hard to form estimates for the scale of such scenarios, and so the importance is difficult to grasp. I don't think estimating the cost-effectiveness of working on these s-risks would be as simple as measuring in suffering-units instead of QALYs.

Tobias Baumann, for example, mentions in his book and a recent podcast that possibly the most important s-risk work we can do now is simply preparing so that we're ready at some future time when we'll actually be able to do something useful. That includes things like "improving institutional decision-making" and probably also moral circle expansion work such as curtailing factory farming.

I think Baumann has also said somewhere that he can be reluctant to mention specific scenarios too much, because doing so may lead to a complacent feeling that we have dealt with the threats: in reality, the greatest s-risk danger is probably something we don't even know about yet.

I hope the above is a fair representation of Baumann's and others' views. I mostly agree with them, although it is a bit shady not to be able to specify what the greatest concerns are. 

I could do a very basic cause-area sense-check of the form:

The greatest s-risks involve huge populations

SO

They probably occur in an interstellar civilisation

AND

Are likely to involve artificial minds (which could probably exist at a far greater density than people)

HENCE

Work on avoiding the worst s-risks is likely to involve influencing whether/how we become a spacefaring civilisation and whether/how we develop and use sentient minds.
