Hi!

I'm Tobias Baumann, co-founder of the Center for Reducing Suffering, a new longtermist research organisation focused on figuring out how we can best reduce severe suffering, taking into account all sentient beings. Ask me anything!

A little bit about me:

I’m interested in a broad range of research topics related to cause prioritisation from a suffering-focused perspective. I’ve written about risk factors for s-risks, different types of s-risks, as well as crucial questions on longtermism and artificial intelligence. My most-upvoted EA Forum post (together with David Althaus from the Center on Long-Term Risk) examines how we can best reduce long-term risks from malevolent actors. I’ve also explored various other topics, including space governance, electoral reform, improving our political system, and political representation of future generations. Most recently, I’ve been thinking about patient philanthropy and the optimal timing of efforts to reduce suffering.

Although I'm most interested in questions related to those areas, feel free to ask me anything. Apologies in advance if there are any questions which, for any of many possible reasons, I’m not able to respond to.


To what degree are the differences between longtermists who prioritize s-risks and longtermists who prioritize x-risks driven by moral disagreements about the relative importance of suffering versus happiness, rather than by factual disagreements about the relative magnitude of s-risks versus x-risks?

Most suffering-focused EAs I know agree about the facts: there's a small chance that AI-powered space colonization will create flourishing futures highly optimized for happiness and other forms of moral value, and this small chance of a vast payoff dominates the expected value of the future on many moral views. I think people generally agree that the typical/median future scenario will be much better than the present (for reasons like this one, though there's much more to say about that), though in absolute terms probably not nearly as good as it could be. 

So in my perception, most of the disagreement comes from moral views, not from perceptions of the likelihood or severity of s-risks.

Great question! I think both moral and factual disagreements play a significant role. David Althaus suggests a quantitative approach that distinguishes between the “N-ratio”, which measures how much weight one gives to suffering vs. happiness, and the “E-ratio”, which refers to one’s empirical beliefs about the ratio of future happiness to future suffering. You could prioritise s-risk because of a high N-ratio (i.e. suffering-focused values) or because of a low E-ratio (i.e. pessimistic views of the future).
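As a rough illustration (a simplified sketch, not David’s exact formulation; it assumes future happiness and suffering can be aggregated into single quantities): let $H$ and $S$ be the expected amounts of future happiness and suffering, so the E-ratio is $E = H/S$, and let $N$ be how many units of happiness one thinks are needed to morally outweigh one unit of suffering. The expected value of the future is then roughly

$$V = H - N \cdot S = S\,(E - N),$$

which is negative, so that reducing suffering looks comparatively more important, whenever $N > E$. A strong focus on s-risks can therefore come from a high $N$ (suffering-focused values), a low $E$ (empirical pessimism about the future), or some combination of both.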

That suggests that moral and factual disagreements are comparably important. But if I had to decide, I’d guess that moral disagreements are the bigger factor, because there is perhaps more convergence (not necessarily a high degree in absolute terms) on empirical matters. In my experience, many who prioritise suffering reduction still agree to some extent with some arguments for optimism about the future (although not with extreme versions, like claiming that the ratio is “1000000 to 1”, or that the future will automatically be amazing if we avoid extinction). For instance, if you were to combine my factual beliefs with the values of, say, Will MacAskill, then I think the result would probably not consider s-risks a top priority (though still worthy of some concern).

In addition, I am increasingly thinking that “x-risk vs s-risk” is perhaps a false dichotomy, and thinking in those terms may not always be helpful (despite having written much on s-risks myself). There are far more ways to improve the long-term future than this framing suggests, and we should look for interventions that steer the future in robustly positive directions.

Strong-upvoted this question. Follow-up question: what kind of research could resolve any factual disagreements?

I'm also interested in answers to this question. I'd add the following nit-picky points: 

  • X-risks and s-risks are substantially overlapping categories (in particular, many unrecoverable dystopia scenarios also contain astronomical suffering), so it's possible a more fruitful framing is prioritisation of s-risks vs other x-risks, s-risks in particular vs x-risks as a whole, or s-risks vs extinction risks.
  • There could also be other moral or factual disagreements that help explain differences in the extent to which different longtermists prioritise s-risks relative to other x-risks.
    • In particular, I tentatively suspect that there's a weak/moderate correlation between level of prioritisation of s-risks and level of moral concern for nonhuman animals. 
      • If this correlation exists, I expect it'd be partly a factual disagreement about sentience (and thus arguably a factual disagreement about the relative magnitudes of s- and x-risks),  but also partly a moral disagreement about how much moral weight/moral status animals warrant. 
      • And I'd expect the correlation to partly be "mere correlation", and partly a causal factor in these prioritisation decisions.

One of my most confusing experiences with EA in the last couple of months has been this poll https://www.facebook.com/groups/effective.altruists/permalink/3127490440640625/ where you and your colleague Magnus stated that one day of extreme suffering (drowning in lava) could not be outweighed by even an (almost) infinite number of days experiencing extreme happiness (which was the answer with the most upvotes). Some stated in the comments that even a “1 in a googol” chance of 1 minute in lava could never be outweighed by an (almost) infinite number of days experiencing extreme happiness.

To be honest, these sound like extremely strange and unintuitive views to me, and they made me wonder if EAs differ from the general population in ways I haven’t thought much about (e.g. less happy in general). So I have several questions:

1. Do you know about any good articles etc. that make the case for such views?
2. Do you think such or similar views are necessary to prioritize S-Risks?
3. Do you think most people would/should vote in such a way if they had enough time to consider the arguments?
4. For me it seems like people constantly trade happiness for suffering (taking drugs expecting a hangover, eating unhealthy stuff expecting health problems or even just feeling full, finishing that show on Netflix instead of going to sleep…). Those are reasons for me to believe that most people might not want to compensate suffering with happiness 1:1, but they are also far from expecting 1:10^17 returns, or from stating that there is no return which could potentially compensate for any kind of suffering.


Disclaimer: I haven't spent much time researching S-Risks, so if I got it all wrong (including the poll), just let me know.

Concerning how EA views on this compare to the views of the general population, I suspect they aren’t all that different. Two bits of weak evidence:

I.

Brian Tomasik did a small, admittedly unrepresentative and imperfect Mechanical Turk survey in which he asked people the following:

At the end of your life, you'll get an additional X years of happy, youthful, and interesting life if you first agree to be covered in gasoline and burned in flames for one minute. How big would X have to be before you'd accept the deal?

More than 40 percent said that they would not accept it “regardless of how many extra years of life” they would get (see the link for some discussion of possible problems with the survey).

II.

The Future of Life Institute did a Superintelligence survey in which they asked, “What should a future civilization strive for?” A clear plurality (roughly a third) answered “minimize suffering” — a rather different question, to be sure, but it does suggest that a strong emphasis on reducing suffering is very common.

1. Do you know about any good articles etc. that make the case for such views?

I’ve tried to defend such views in chapters 4 and 5 here (with replies to some objections in chapter 8). Brian Tomasik has outlined such a view here and here.

But many authors have in fact defended such views about extreme suffering. Among them are Ingemar Hedenius (see Knutsson, 2019); Ohlsson, 1979 (review); Mendola, 1990; 2006; Mayerfeld, 1999, p. 148, p. 178; Ryder, 2001; Leighton, 2011, ch. 9; Gloor, 2016, II.

And many more have defended views according to which happiness and suffering are, as it were, morally orthogonal.

2. Do you think such or similar views are necessary to prioritize S-Risks?

As Tobias said: No. Many other views can support such a priority. Some of them are reviewed in chapters 1, 6, and 14 here.

3. Do you think most people would/should vote in such a way if they had enough time to consider the arguments?

I say a bit on this in footnote 23 in chapter 1 and in section 4.5 here.

4. For me it seems like people constantly trade happiness for suffering ... Those are reasons for me to believe that most people ... are also far from expecting 1:10^17 returns, or from stating that there is no return which could potentially compensate for any kind of suffering.

Many things to say on this. First, as Tobias hinted, acceptable intrapersonal tradeoffs cannot necessarily be generalized to moral interpersonal ones (cf. sections 3.2 and 6.4 here). Second, there is the point Jonas made, which is discussed a bit in section 2.4 in ibid. Third, tradeoffs concerning mild forms of suffering that a person agrees to undergo do not necessarily say much about tradeoffs concerning states of extreme suffering that the sufferer finds unbearable and is unable to consent to (e.g. one may endorse lexicality between very mild and very intense suffering, cf. Klocksiem, 2016, or think that voluntarily endured suffering occupies a different moral dimension than does suffering that is unbearable and which cannot be voluntarily endured). More considerations of this sort are reviewed in section 14.3, “The Astronomical Atrocity Problem”, here.

Thanks a lot for the reply and all the links.

4. For me it seems like people constantly trade happiness for suffering (taking drugs expecting a hangover, eating unhealthy stuff expecting health problems or even just feeling full, finishing that show on Netflix instead of going to sleep…). Those are reasons for me to believe that most people might not want to compensate suffering with happiness 1:1, but they are also far from expecting 1:10^17 returns, or from stating that there is no return which could potentially compensate for any kind of suffering.


One counterargument that has been raised against this is that people just accept suffering in order to avoid other forms of suffering. E.g., you might feel bored if you don't take drugs, might have uncomfortable cravings for unhealthy food if you don't eat it, etc.

I do think this point could be part of an interesting argument, but as it stands, it merely offers an alternative explanation without analyzing carefully which of the two explanations is correct. So on its own, this doesn't seem to be a strong counterargument yet.

Thanks for the reply. With regard to drugs, I think it depends on the situation. Many people drink alcohol even if they are already in a good mood, to get even more excited (while being fully aware that they might experience at least some kind of suffering the next day, and possibly long term). In this case I think one couldn't say they do it to avoid suffering (unless you declare everything below the best possible experience to be suffering). There are obviously other cases where people just want to stop thinking about their problems, stop feeling physical pain, etc.

I don't think that if someone rejects the rationality of trading off neutrality for a combination of happiness and suffering, they need to explain every case of this. (Analogously, the fact that people often do things for reasons other than maximizing pleasure and minimizing pain isn't an argument against ethical hedonism, just psychological hedonism.) Some trades might just be frankly irrational or mistaken, and one can point to biases that lead to such behavior.

I don’t think this view is necessary to prioritise s-risk. A finite but relatively high “trade ratio” between happiness and suffering can be enough to focus on s-risks. In addition, I think it’s more complicated than putting some numbers on happiness vs. suffering. (See here for more details.) For instance, one should distinguish between the intrapersonal and the interpersonal setting - a common intuition is that one man’s pain can’t be outweighed by another’s pleasure.

Another possibility is lexicality: one may contend that only certain particularly bad forms of suffering can’t be outweighed. You may find such views counterintuitive, but it is worth noting that lexicality can be multi-dimensional and need not involve abrupt breaks. It is, for instance, quite possible to hold the view that 1 minute of lava is ‘outweighable’ but 1 day is not. (I think I would not have answered “no amount can compensate” if it was about 1 minute.)
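As a purely illustrative sketch (one possible formalisation, not a claim about what the cited authors endorse): one could let the disvalue of a period of extreme suffering grow continuously, but without bound, as its duration approaches some critical length $T$, e.g.

$$D(d) = \frac{d}{1 - d/T} \ \ \text{for } d < T, \qquad D(d) \ \text{not outweighable for } d \ge T,$$

where $d$ is the duration and $T$ lies somewhere between one minute and one day. Below $T$, suffering trades off against happiness at an ever steeper but still finite rate, so there is no abrupt jump anywhere; yet a full day in lava would exceed any finite compensation while a minute would not.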

I also sympathise with the view mentioned by Jonas: that happiness matters mostly insofar as an existing being has a craving or desire to experience it. The question, then, is just how strong the desire to experience a certain timespan of bliss is. The poll was just about how I would make this tradeoff for myself, and it just so happens that abstract prospects of bliss do not evoke a very strong desire in me. It’s certainly not enough to accept a day of drowning in lava - and that is true regardless of how long the bliss lasts. Your psychology may be different, but I don’t think there’s anything inconsistent or illogical about my preferences.

Thanks a lot for the reply and the links.

What are some common misconceptions about the suffering-focused world-view within the EA community?

I would refer to this elaborate comment by Magnus Vinding on a very similar question. Like Magnus, I think a common misconception is that suffering-focused views have certain counterintuitive or even dangerous implications (e.g. relating to world destruction), when in fact those problematic implications do not follow.

Suffering-focused ethics is also still sometimes associated with negative utilitarianism (NU). While NU counts as a suffering-focused view, this association often fails to appreciate the breadth of possible suffering-focused views, including pluralist and even non-consequentialist views. Most suffering-focused views are not as ‘extreme’ as pure negative utilitarianism and are far more compatible with widely shared moral intuitions. (Cf. this recent essay for an overview.)

Last, and related to this, there is a common perception of suffering-focused views as unusual or ‘fringe’, when they in fact enjoy significant support (in various forms).

While I agree that problematic implications do not follow in practice, I still think some views have highly counterintuitive implications. E.g., some suffering-focused views would imply that most happy present-day humans would be better off committing suicide if there's a small chance that they would experience severe suffering at some point in their lives. This seems a highly implausible and under-appreciated implication (and makes me assign more credence to views that don't have this implication, such as preference-based and upside-focused views).

It is fair to say that some suffering-focused views have highly counterintuitive implications, such as the one you mention. The misconception is just that this holds for all suffering-focused views. For instance, there are plenty of possible suffering-focused views that do not imply that happy humans would be better off committing suicide. In addition to preference-based views, one could value happiness but endorse the procreative asymmetry (so that lives above a certain threshold of welfare are considered OK even if there is some severe suffering), or one could be prioritarian or egalitarian in interpersonal contexts, which also avoids problematic conclusions about such tradeoffs. (Of course, those views may be considered unattractive for other reasons.)

I think views along these lines are actually fairly widespread among philosophers. It just so happens that suffering-focused EAs have often promoted other variants of SFE that do arguably have implications for intrapersonal tradeoffs that you consider counterintuitive (and I mostly agree that those implications are problematic, at least when taken to extremes), thus giving the impression that all or most suffering-focused views have said implications.

Do you think this is highly implausible even if you account for:

  • the opportunities to reduce other people's extreme suffering that a person committing suicide would forego,
  • the extreme suffering of one's loved ones this would probably increase,
  • plausible views of personal identity on which risking the extreme suffering of one's future self is ethically similar to, if not the same as, risking it for someone else,
  • relatedly, views of probability where the small measure of worlds with a being experiencing extreme suffering are as "real" as the large measure without, and
  • the fact that even non-negative-utilitarian views will probably consider some forms of suffering so bad that small risks of them would outweigh, for oneself, any upsides that a typical human experiences (ignoring effects on other people)?

These seem like good objections to me, but overall I still find it pretty implausible. A hermit who leads a happy life alone on an island (and has read lots of books about personal identity and otherwise acquired a lot of wisdom) probably wouldn't want to commit suicide unless the amount of expected suffering in their future was pretty significant.

(I didn't understand, or disagree with, the fourth point.)

[Warning: potentially disturbing discussion of suicide and extreme suffering.]

I agree with many of the points made by Anthony. It is important to control for these other confounding factors, and to make clear in this thought experiment that the person in question cannot reduce more suffering for others, and that the suicide would cause less suffering in expectation (which is plausibly false in the real world, also considering the potential for suicide attempts to go horribly wrong, Humphry, 1991, “Bizarre ways to die”). (So to be clear, and as hinted by Jonas, even given pure NU, trying to commit suicide is likely very bad in most cases, Vinding, 2020, 8.2.)

Another point one may raise is that our intuitions cannot necessarily be trusted when it comes to these issues, e.g. because we have an optimism bias (which suggests that we may, at an intuitive level, wholly disregard these tail risks); because we evolved to prefer existence almost no matter the (expected) costs (Vinding, 2020, 7.11); and because we intuitively have a very poor sense of how bad the states of suffering in question are (cf. ibid., 8.12).

Intuitions also differ on this matter. One EA told me that he thinks we are absolutely crazy for staying alive (disregarding our potential to reduce suffering), especially since we have no off-switch in case things go terribly wrong. This may be a reason to be less sure of one's immediate intuitions on this matter, regardless of what those intuitions might be.

I also think it is important to highlight, as Tobias does, that there are many alternative views that can accommodate the intuition that the suicide in question would be bad, apart from a symmetry between happiness and suffering, or upside-focused views more generally. For example, there is a wide variety of harm-focused views, including but not restricted to negative consequentialist views in particular, that will deem such a suicide bad, and they may do so for many different reasons, e.g. because they consider one or more of the following an even greater harm (in expectation) than the expected suffering averted: the frustration of preferences, premature death, lost potential, the loss of hard-won knowledge, etc. (I say a bit more about this here and here.)

Relatedly, one should be careful about drawing overly general conclusions from this case. For example, the case of suicide does not necessarily say much about different population-ethical views, nor about the moral importance of creating happiness vs. reducing suffering in general. After all, as Tobias notes, quite a number of views will say that premature deaths are mostly bad while still endorsing the Asymmetry in population ethics, e.g. due to conditional interests (St. Jules, 2019; Frick, 2020). And some views that reject a symmetry between suffering and happiness will still consider death very bad on the basis of pluralist moral values (cf. Wolf, 1997, VIII; Mayerfeld, 1996, “Life and Death”; 1999, p. 160; Gloor, 2017; 1, 4.3, 5).

Similar points can be made about intra- vs. interpersonal tradeoffs: one may think that it is acceptable to risk extreme suffering for oneself without thinking that it is acceptable to expose others to such a risk for the sake of creating a positive good for them, such as happiness (Shiffrin, 1999; Ryder, 2001; Benatar & Wasserman, 2015, “The Risk of Serious Harm”; Harnad, 2016; Vinding, 2020, 3.2).

(Edit: And note that a purely welfarist view entailing a moral symmetry between happiness and suffering would actually be a rather fragile basis on which to rest the intuition in question, since it would imply that people should painlessly end their lives if their expected future well-being were just below "hedonic zero", even if they very much wanted to keep on living (e.g. because of a strong drive to accomplish a given goal). Another counterintuitive theoretical implication of such a view is that one would be obliged to end one's life, even in the most excruciating way, if it in turn created a new, sufficiently happy being, cf. the replacement argument discussed in Jamieson, 1984; Pluhar, 1990. I believe many would find these implications implausible as well, even on a purely theoretical level, suggesting that what is counterintuitive here is the complete reliance on a purely welfarist view — not necessarily the focus on reducing suffering over increasing happiness.)

Jonas, I am curious: how are you dealing with the above implication?

As I said, mainly by assigning more credence to other views.


What is the most likely reason that s-risks are not worth working on?

Apart from the normative discussions relating to the suffering focus (cf. other questions), I think the most likely reasons are that s-risks may simply turn out to be too unlikely, or too far in the future for us to do anything about them at this point. I do not currently believe either of those (see here and here for more), and hence do work on s-risks, but it is possible that I will eventually conclude that s-risks should not be a top priority for one of those reasons.

Paul Christiano talks about this question in his 80,000 Hours podcast, mainly saying that s-risks seem less tractable than AI alignment (but also expressing some enthusiasm for working on them).

Does CRS (or do you and Magnus Vinding) have a relatively explicit, shared theory of change? Do you each have somewhat different theories of change, but these are still relatively explicit and communicated between you? Is it less explicit than that? 

Whichever is the case, could you say a bit about why you think that's the case?

(I'm basically just porting these questions over from the AMA with Owen Cotton-Barratt of FHI. I think the questions are slightly less relevant here, given CRS is newer and smaller. But I still find these questions interesting in relation to basically any EA/longtermist research organisations or individual researchers.)

We have thought about this, and wrote up some internal documents, but have not yet published anything (though we might do that at some point, as part of a strategic plan). Magnus and I are quite aligned in our thinking about the theory of change. The key intended outcome is to catalyse a research project on how to best reduce suffering, both by creating relevant content ourselves and by convincing others to share our concerns regarding s-risks and reducing future suffering.

That makes sense, thanks.

Do you have a sense of who you want to take up that project, or who you want to catalyse it among? E.g., academics vs EA researchers, and what type/field? 

And does this influence what you work on and how you communicate/disseminate your work?

How could individual donors best help in reducing suffering and s-risks? How should longtermist suffering-focused donors approach donating differently from general longtermist donors?

One key difference is that there is less money in it, because OpenPhil, the biggest EA grantmaker, is not focused on reducing s-risks. In a certain sense, that is good news for donors, because work on s-risks is plausibly more funding-constrained than non-suffering-focused longtermism.

In terms of where to donate, I would recommend the Center on Long-Term Risk and the Center for Reducing Suffering (which I co-founded myself). Both of those organisations are doing crucial research on s-risk reduction. If you are looking for something a bit less abstract, you could consider Animal Ethics, the Good Food Institute, or Wild Animal Initiative.

The universe is vast, so it seems there is a lot of room for variation even within the subset of risks involving astronomical quantities of suffering. How much, in your opinion, do s-risks vary in severity? Relatedly, what are your grounds for singling out s-risks as the object of concern, rather than those risks involving the most suffering?

I agree that s-risks can vary a lot (by many orders of magnitude) in terms of severity. I also think that this gradual nature of s-risks is often swept under the rug because the definition just uses a certain threshold (“astronomical scale”). There have, in fact, been some discussions about how the definition could be changed to ameliorate this, but I don’t think there is a clear solution. Perhaps talking about reducing future suffering, or preventing worst-case outcomes, can convey this variation in severity more than the term ‘s-risks’.

Regarding your second question, I wrote up this document a while ago on whether we should focus on worst-case outcomes, as opposed to suffering in median futures or 90th-percentile-badness-futures (given that those are more likely than worst-cases). However, this did not yield a clear conclusion, so I consider this an open question.

What grand futures do suffering-focused altruists tend to imagine? In other words, what do plausible win conditions look like?

One is the Hedonistic Imperative/suffering abolition: using biotechnology to modify sentient life to not suffer (or not suffer significantly, and ensuring artificial sentience does not suffer?). David Pearce, a negative utilitarian, is the founding figure for this.

David Pearce, a negative utilitarian, is the founding figure for [suffering abolition].

It might be of interest to some that Pearce is/was skeptical about the possibility or probability of s-risks related to digital sentience and space colonization: see his reply to What does David Pearce think about S-risks (suffering risks)? on Quora (where he also mentions the moral hazard of "understanding the biological basis of unpleasant experience in order to make suffering physically impossible").

I think a plausible win condition is that society has some level of moral concern for all sentient beings (it doesn’t necessarily need to be entirely suffering-focused) as well as stable mechanisms to implement positive-sum cooperation or compromise. The latter guarantees that moral concerns are taken into account and possible gains from trade can be achieved. (An example of this could be cultivated meat, which allows us to reduce animal suffering while accommodating the interests of meat eaters.)

However, I think suffering reducers in particular should perhaps not focus on imagining best-case outcomes. It is plausible (though not obvious) that we should focus on preventing worst-case outcomes rather than shooting for utopian outcomes, as the difference in expected suffering between a worst-case and the median outcome may be much greater than the difference between the median outcome and the best possible future.


How did you figure out that you prioritize the reduction of suffering?

I am interested in your personal life story and in the most convincing arguments or intuition pumps.

I was exposed to arguments for suffering-focused ethics from the start, since I was involved with German-speaking EAs (the Effective Altruism Foundation / Foundational Research Institute back then). I don’t really know why exactly (there isn’t much research on what makes people suffering-focused or non-suffering-focused), but this intuitively resonated with me.

I can’t point to any specific arguments or intuition pumps, but my views are inspired by writing such as the Case for Suffering-Focused Ethics, Brian Tomasik’s essays, and writings by Simon Knutsson and Magnus Vinding.

Three related questions, also nicked from the AMA with Owen Cotton-Barratt[1]:

  1. Suppose, in 10 years, CRS has succeeded way beyond what you expected now. What happened?
  2. Suppose that, in 10 years, CRS seems to have had no impact.[2] What signs would reveal this? And what seem the most likely explanations for the lack of impact?
  3. Have you already started trying to assess the quality or impact of CRS's work? Do you have thoughts on how you might do this in future?

(Feel free to answer different versions/framings of the questions instead.)

Obviously these questions are unusually hard in relation to research and for longtermist stuff, and perhaps especially for a relatively new org, so apologies for that. But that's also part of why these questions seem so interesting!

[1] In particular, nicked from/inspired by Neel Nanda, myself, and tamgent.

[2] If there are already good signs of impact, feel free to interpret this as "seems to have had no impact after 2020", or as "seems to have had no impact after 2020, plus the apparent impacts by 2020 all ended up washing out over time".

Re: 1., there would be many more (thoughtful) people who share our concern about reducing suffering and s-risks (not necessarily with strongly suffering-focused values, but at least giving considerable weight to it). That results in an ongoing research project on s-risks that goes beyond a few EAs (e.g., it is also established in academia or other social movements).

Re: 2., a possible scenario is that suffering-focused ideas just never gain much traction, and consequently efforts to reduce s-risks will just fizzle out. However, I think there is significant evidence that at least an extreme version of this is not happening.

Re: 3., I think the levels of engagement and feedback we have received so far are encouraging. However, we do not currently have any procedures in place to measure impact, which is (as you say) incredibly hard for what we do. But of course, we are constantly thinking about what kind of work is most impactful!

Thanks. 

Those answers make sense to me. But I notice that the answer to question 1 sounds like an outcome you want to bring about, but which I wouldn't be way more surprised to observe in a world where CRS doesn't exist/doesn't have impact than one in which it does. This is because it could be brought about by the actions of others (e.g., CLR). 

So I guess I'd be curious about things like:

  • Whether and how you think that that desired world-state will look different if CRS succeeds than if CRS accomplishes very little but other groups with somewhat similar goals succeed
  • How you might disentangle the contribution of CRS to this desired outcome from the contributions of others

I guess this connects to the question of quality/impact assessment as well. 

I also think this dilemma is far from unique to CRS. In fact, it's probably weaker for CRS than for non-suffering-focused longtermists (e.g. much of FHI), because there are currently more of the latter (or at least they control more resources), so there are more plausible alternative candidates for the causes of non-suffering-focused longtermist impacts.

Also, do you think it might make sense for CRS to run a (small) survey about the quality & impact of its outputs?

What kinds of evidence and experience could induce you to update for/against the importance of severe suffering?

Do you believe that exposure to or experience of severe suffering would cause the average EA to focus more heavily on it?

Edit: Moving the question "Thinking counterfactually, what evidence and experiences caused you to have the views you do on severe suffering?" down here because it looks like other commenters already asked another version of it.

I would guess that actually experiencing certain possible conscious states, in particular severe suffering or very intense bliss, could significantly change my views, although I am not sure if I would endorse this as “reflection” or if it might lead to bias.

It seems plausible (but I am not aware of strong evidence) that experience of severe suffering generally causes people to focus more on it. However, I myself have fortunately never experienced severe suffering, so that would be a data point to the contrary.

Hello Tobias, I am very interested in the article you wrote with David Althaus, because we are working with UOW in Australia to propose a grant on this topic. I'd love to discuss this more with both of you - is there a way to contact you more directly? Thanks a lot! Juliette

Thanks! I've started an email thread with you, me, and David.
