Hi!
I'm Tobias Baumann, co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings. Ask me anything!
A little bit about me:
I’m interested in a broad range of research topics related to cause prioritisation from a suffering-focused perspective. I’ve written about risk factors for s-risks, different types of s-risks, and crucial questions on longtermism and artificial intelligence. My most-upvoted EA Forum post (co-authored with David Althaus from the Center on Long-Term Risk) examines how we can best reduce long-term risks from malevolent actors. I’ve also explored various other topics, including space governance, electoral reform, improving our political system, and political representation of future generations. Most recently, I’ve been thinking about patient philanthropy and the optimal timing of efforts to reduce suffering.
Although I'm most interested in questions related to those areas, feel free to ask me anything. Apologies in advance if there are any questions that, for whatever reason, I'm not able to respond to.
Apart from the normative discussions relating to the suffering focus (cf. other questions), I think the most likely reasons are that s-risks may simply turn out to be too unlikely, or too far in the future for us to do anything about them at this point. I do not currently believe either of those things (see here and here for more), and hence work on s-risks, but it is possible that I will eventually conclude that s-risks should not be a top priority for one of those reasons.