Hi!
I'm Tobias Baumann, co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings. Ask me anything!
A little bit about me:
I’m interested in a broad range of research topics related to cause prioritisation from a suffering-focused perspective. I’ve written about risk factors for s-risks, different types of s-risks, as well as crucial questions on longtermism and artificial intelligence. My most-upvoted EA Forum post (together with David Althaus from the Center on Long-Term Risk) examines how we can best reduce long-term risks from malevolent actors. I’ve also explored various other topics, including space governance, electoral reform, improving our political system, and political representation of future generations. Most recently, I’ve been thinking about patient philanthropy and the optimal timing of efforts to reduce suffering.
Although I'm most interested in questions related to those areas, feel free to ask me anything. Apologies in advance if there are any questions which, for any of many possible reasons, I’m not able to respond to.
I agree that s-risks can vary a lot (by many orders of magnitude) in terms of severity. I also think that this graded nature of s-risks is often swept under the rug because the definition relies on a single threshold (“astronomical scale”). There have, in fact, been some discussions about how the definition could be changed to address this, but I don’t think there is a clear solution. Perhaps talking about reducing future suffering, or preventing worst-case outcomes, conveys this variation in severity better than the term ‘s-risks’ does.
Regarding your second question, I wrote up this document a while ago on whether we should focus on worst-case outcomes, as opposed to suffering in median futures or 90th-percentile-badness futures (given that those are more likely than worst cases). However, that analysis did not yield a clear conclusion, so I still consider this an open question.