The s-risk people I'm familiar with are mostly interested in worst-case s-risk scenarios that involve vast populations of sentient beings over vast time periods. It's hard to form estimates for the scale of such scenarios, so their importance is difficult to grasp. I don't think estimating the cost-effectiveness of working on these s-risks would be as simple as measuring in suffering-units instead of QALYs.
Tobias Baumann, for example, mentions in his book and in a recent podcast that possibly the most important s-risk work we can do now is simply preparing for a future time when we will actually be able to do something useful. That includes things like improving institutional decision-making, and probably also moral circle expansion work such as curtailing factory farming.
I think Baumann also said somewhere that he is reluctant to dwell too much on specific scenarios, because doing so may create a complacent feeling that we have dealt with the threats: in reality, the greatest s-risk danger is probably something we don't even know about yet.
I hope the above is a fair representation of Baumann's and others' views. I mostly agree with them, although it is a bit unsatisfying not to be able to specify what the greatest concerns are.
I could do a very basic cause-area sense-check of the form:
The greatest s-risks involve huge populations
SO
They probably occur in an interstellar civilisation
AND
Are likely to involve artificial minds (which could probably exist at a far greater density than people)
HENCE
Work on avoiding the worst s-risks is likely to involve influencing whether and how we become a spacefaring civilisation, and whether and how we develop and use sentient artificial minds.
Thanks for your response. I agree that s-risks from AI are very important, but:

1. Is there a comparison showing they are more important than other s-risk areas (e.g. macrostrategy, politics)?
2. Within the AI area, is AI sentience or AI safety for humans more important? Accordingly, what subjects should we work on or study to address s-risks?

I've thought of working on cognitive science, because a lot of moral uncertainty remains unresolved (e.g. the ratio of suffering to happiness, the power of the hedonic treadmill). Neuroscience is developing quickly, too. If we can use neuroscience to understand the essence of "consciousness", that could inform AI and animal sentience research as well as AI moral alignment. But there seem to be fewer discussions about this?