Working to reduce extreme suffering for all sentient beings.
Author of Suffering-Focused Ethics: Defense and Implications & Reasoned Politics.
Co-founder (with Tobias Baumann) of the Center for Reducing Suffering (CRS).
Thanks for highlighting that. :)
I agree that this is relevant and I probably should have included it in the post (I've now made an edit). It was part of the reason that I wrote "it is unclear whether this pattern in moral judgment necessarily applies to all or even most kinds of acts inspired by different moral beliefs". But I still find it somewhat striking that such actions seemed to be considered as bad as, or even slightly worse than, intentional harm. But I guess subjects could also understand "intentional harm" in a variety of ways. In any case, I think it's important to reiterate that this study is in itself just suggestive evidence that value differences may be psychologically fraught.
It's not the case that there are N technologies and progress consists solely of improving those technologies; progress usually happens by developing new technologies.
Yeah, I agree with that. :)
But I think we can still point to some important underlying measures — say, "the speed at which we transmit signals around Earth" or "the efficiency with which we can harvest solar energy" — where there isn't much room for further progress. On the first of those two measures, there is essentially no room for further progress. On the second, we can at the very most see ~a doubling from where we currently are, whereas we have seen more than a 40x increase since the earliest solar cells in the late 1800s. Those are some instances of progress that cannot be repeated, even if we create new technologies within these domains.
Of course, there may be other domains that are similarly significant. But I still think the increasing number of domains in which past growth accomplishments cannot be repeated provides a modest reason to doubt a future growth explosion. As noted, I don't think any of the reasons I listed are strong in themselves, but when combined with the other reasons, including the decline in innovations per capita, recent empirical trends in hardware progress, and the point I made here about the potential difficulties of explosive growth given limited efficiency gains, I do think a growth explosion begins to look rather unlikely, especially one that implies >1000 percent annual growth (corresponding to an economy that doubles ~every three months or faster).
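For what it's worth, the correspondence between annual growth rates and doubling times can be checked with a quick calculation (a minimal sketch; the function name is mine, purely for illustration):

```python
import math

def doubling_time_months(annual_growth_percent):
    """Doubling time (in months) implied by a given annual growth rate."""
    # 1000% annual growth means the economy is multiplied by 11 over a year.
    multiplier = 1 + annual_growth_percent / 100
    # Solve 2 ** (t / 12) == multiplier for t (in months).
    return 12 * math.log(2) / math.log(multiplier)

print(round(doubling_time_months(1000), 1))  # ~3.5 months
```

So 1000 percent annual growth corresponds to a doubling time of roughly three and a half months, and higher growth rates to correspondingly faster doublings.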
I recently asked the question whether anyone had quantified the percent of tasks that computers are superhuman at as a function of time - has anyone?
I'm not aware of any. Though I suppose it would depend a lot on how such a measure is operationalized (in terms of which tasks are included).
This is seriously cherry picked.
I quoted that line of Murphy's as one that provides examples of key technologies that are close to hitting ultimate limits; I didn't mean to say that they were representative of all technologies. :)
But it's worth noting that more examples of technologies that cannot be improved much further are provided in Gordon's The Rise and Fall of American Growth as well as in Murphy's article. Another example is the speed at which we communicate information around Earth, where we hit the ultimate limit (the speed of light) many decades ago. That's a pretty significant step that cannot be repeated, which of course isn't to say that there isn't still much room for progress in many other respects.
On the second point, I was referring to empirical trends we're observing, as well as a theoretical ceiling within the hardware paradigm that has been dominant for the last several decades. It is quite conceivable — though hardly guaranteed — that other paradigms will eventually supersede the silicon paradigm as we know it, which gives us some reason to believe that we might see even faster growth in the future. But I don't think it's a strong reason in light of the recent empirical trends and the fact that the ultimate limits to computation are not that far off.
Thanks for your question :)
I might write a more elaborate comment later, but to give a brief reply:
It’s true that Model 2 (defined in terms of those three assumptions) does not rule out significantly higher growth rates, but it does, I believe, make explosive growth quite a lot less likely compared to Model 1, since it does not imply that there’s a single bottleneck that will give rise to explosive growth.
I think most of your arguments for Model 2 also apply to this perspective. The one exception is the observation that growth rates are declining, though this perspective would likely argue that this is because of the demographic transition, which breaks the positive feedback loop driving hyperbolic growth.
I think most of the arguments I present in the section on why I consider Model 2 most plausible are about declining growth along various metrics. And many of the declining trends appear to have little to do with the demographic transition, especially those presented in the section “Many key technologies only have modest room for further growth”, as well as the apparent decline in innovations per capita.
Asserting (as Epicurean views do) that death is not bad (in itself) for the being that dies is one thing.
But Epicureans tend to defend a stronger claim, namely that there is nothing suboptimal about death — or rather, about being dead — for the being who dies (which is consistent with Epicurean views of wellbeing). I believe this is the view defended in Hol, 2019.
Asserting (as the views under discussion do) that death (in itself) is good
But death is not good in itself on any of the views under discussion. First, death in itself has no value or disvalue on any of these views. Second, using the word “good” is arguably misleading, since death (in terms of its counterfactual effects) can at most be less bad on minimalist views:
Death is only “good” in the sense that, for example, we might say that it was “good” that a moose who had been hit by a car was euthanized (assuming the moose would otherwise have died more painfully). It is clearer and more charitable to use the phrase ‘less bad.’
Besides its [i.e. experientialist minimalism’s] divergence from virtually everyone's expressed beliefs and general behaviour
This may be too strong a statement. For instance, it seems that there is a considerable number of Buddhists (and others) who at least express, and aspire to act in alignment with, views centered on the minimization of suffering.
Regardless, I don’t think divergence from most people’s behavior is a strong point against any given axiology. After all, most people’s behavior is inconsistent with impartial axiologies/ethics in general, as well as with classical utilitarian axiology/ethics in particular, even in their prudential concerns. As one would expect, we mostly seem to optimize for biological and social drives rather than for any reflectively endorsed axiology.
For the sake of a less emotionally charged variant of Mathers' example, responses to Singer's shallow pond case along the lines of, "I shouldn't step in, because my non-intervention is in the child's best interest: the normal life they could 'enjoy' if they survive accrues more suffering in expectation than their imminent drowning" appear deranged.
As Teo has essentially spent much of his sequence arguing, proponents of minimalist axiologies would strongly agree that such a response is extremely implausible in any practically relevant case, for many reasons: it overlooks the positive roles of the child’s continued existence, the utility of strong norms of helping and protecting life, the value of trying to reduce clear and present suffering, etc. (Just to clarify that important point. I realize that there likely was an implicit ‘other things equal’ qualification in that thought experiment, but it’s arguably critical to make that radical assumption explicit.)
Additionally, minimalist axiology is compatible with moral duties or moral rights that would require us to help and protect others, which is another way in which someone who endorses an experientialist minimalist axiology may agree that it is wrong not to help.
In any case, the thought experiment above seems to ignore the question of comparative repugnance. For starters, a contrasting axiology such as CU would imply that it would be better to let the child drown (in the other things equal, isolated case) if the rest of the child’s life were going to be overall slightly “net negative” otherwise (as we can stipulate that it would be in the hypothetical case we're considering). This also seems repugnant.
Yet CU is subject to far more repugnant implications of this kind. For example, assume that other things are equal, and imagine that we walk past a person who is experiencing the most extreme suffering — suffering so extreme that the sufferer in that moment will give anything to make it stop. Imagine that we can readily step in and stop this suffering, in which case the person we are saving will live an untroubled life for the rest of their days. Otherwise, the sufferer will continue to experience extreme, incessant suffering, followed, eventually, by a large amount of bliss that according to CU would outweigh the suffering. (This is somewhat analogous to the first thought experiment found here.)
CU would say that it is better to leave that person to continue to be tormented for the sake of the eventual bliss, even though the person would rather be freed from the extreme suffering while in that state.
Is that a less repugnant implication?
In general, it seems important to compare the repugnant conclusions of different views. And as Teo has recently argued, when we compare the most repugnant conclusions of different views in population ethics, minimalist views are arguably less repugnant than offsetting views.
Varieties of experientialist minimalist views that are overlooked in this piece
I think the definition of experientialist minimalism employed in the post is in need of elaboration, as it seems that there are in fact minimalist experientialist views that would not necessarily have the implications that you inquire about, yet these views appear to differ from the experientialist minimalist views considered in the post.
To give an example, one could think that what matters is only the reduction of experiential disvalue (and thereby be an experientialist minimalist), but then further hold that the disvalue of the experiences in a life must be evaluated in light of the total experiential contents of a life, as opposed to evaluating it in reductionist terms that allow us to always judge the disvalue of individual experiences in isolation, without regard to context. (I suspect that Teo implicitly assumes the latter view, though he can obviously best answer for himself.)
In particular, one could hold that a painful experience of toiling toward some end will have greater disvalue if the end goal is never realized (these experientialist views could thus have a lot in common with preference-based views). Or one could think that painful experiences in a life that does not contain certain experiences later, e.g. experiences of learning, are worse than otherwise, even if those ameliorating experiences were never desired by the subject (these experientialist views could thus also have a lot in common with objective-list views). This approach to evaluating experiences seems similar to how some views will assign negative value to sadistic pleasures, in that context is taken to matter to the value or disvalue of experiences.
Regardless of whether these ‘context-sensitive’ experientialist views are plausible, it seems that, as a conceptual matter, the post would have benefited from clarifying whether they are excluded from its scope, as they seem to be (I share responsibility for this omission, since I gave extensive feedback on the post). [ETA: The post now does include a note on this, "Relatedly, I further assume ..."]
(As for the substantive plausibility of such ‘context-sensitive’ experientialist minimalist views, I’m not sure whether such views would fare better or worse than, say, preference-based views, but then I admittedly haven’t thought much about their comparative strengths and weaknesses.)
The Epicurean view: Existing defenses
On the general question of whether beings can be harmed by death, it’s worth noting that this is, of course, a question that has been elaborately discussed in the literature. And the view that death cannot be bad for the being who dies — i.e. the Epicurean view — has been defended in modern times in, for example, Rosenbaum, 1986 and Hol, 2019. (But again, it's worth stressing that death can still be extremely bad for instrumental reasons, even if one grants the Epicurean view.)
Lastly, concerning the plausibility of experientialist views that assign (dis)value to individual experiences in isolation, without regard to context, I think one can reasonably argue that minimalist versions of these views still overall have less repugnant implications than do the contrasting offsetting experientialist views (see e.g. here and here). (The comparison between minimalist and offsetting experientialist views seems relevant here since experientialist views appear to be popular in EA, cf. the EA Survey; I realize that experientialist offsetting views weren’t claimed to be more plausible than experientialist minimalist views in the comment above.)
In particular, if we use emotive examples in order to stress-test minimalist versions of these views, then we should presumably also be willing to use emotive examples to stress-test corresponding offsetting views, such as examples involving, say, a group associated with atrocious evil that forces individuals into experience machines in which these individuals experience a large amount of bliss after which they are tortured to death, yet where the bliss outweighs the torture on the offsetting view under consideration. It likewise seems repugnant — in my view considerably more repugnant — to think that such a group of actors would be doing the forced individuals a favor.
So among experientialist views, it seems plausible (to me at least) to claim that minimalist views are the least repugnant, all things considered. And the same arguably goes for preference-based axiological views (as argued here and here).
>The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)
I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
The arguments presented against the Asymmetry in the section “The Intuition of Neutrality” are the ones I criticize in the post. The core claims defended in “Clumsy Gods” and “Why the Intuition of Neutrality Is Wrong” are, as far as I can tell, relative claims: it is better to bring Bob/Migraine-Free into existence than Alice/Migraine, because Bob/Migraine-Free would be better off. Someone who endorses the Asymmetry may agree with those relative claims (which are fairly easy to agree with) without giving up on the Asymmetry.
Specifically, one can agree that it’s better to bring Bob into existence than to bring Alice into existence while also maintaining that it would be better if Bob (or Migraine-Free) were not brought into existence in the first place. Only “The Intuition of Neutrality” appears to take up this latter question about whether it can be better to start a life than to not start a life (purely for its own sake), which is why I consider the arguments found there to be the main arguments against the Asymmetry.
If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss.
It seems worth separating purely axiological issues from issues in decision theory that relate to tiny probabilities. Specifically, one might think that this thought experiment drags two distinct issues into play: questions/intuitions relating to value lexicality, and questions/intuitions relating to tiny probabilities and large numbers. I think it’s ideal to try to separate those matters, since each of them is already quite tricky on its own.
To make the focus on the axiological question clearer, we may “actualize” the thought experiment such that we’re talking about either preventing a lifetime of the most extreme unmitigated torture or creating a trillion, trillion, trillion, trillion lives of bliss.
The lexical view says that it is better to do the former. This seems reasonable to me. I do not think there is any need or ethical duty to create lives of bliss, let alone an ethical duty to create lives of bliss at the (opportunity) cost of failing to prevent a lifetime of extreme suffering. Likewise, I do not think there is anything about pleasure (or other purported goods) that renders them an axiological counterpart to suffering. And I don’t think the numbers are all that relevant here, any more than thought experiments involving very large numbers of, say, art pieces would make me question my view that extreme suffering cannot be outweighed by many art pieces.
Regarding moral uncertainty: As noted in the final section above, there are many views that support granting a foremost priority to the prevention of extreme suffering and extremely bad lives. Consequently, even if one does not end up with a strictly lexical view at the theoretical level, one may still end up with an effectively lexical view at the practical level, in the sense that the reduction of extreme suffering might practically override everything else given its all-things-considered disvalue and expected prevalence.
I talk about the asymmetry between goods and bads in chapter 9 on the value of the future in the section “The Case for Optimism”, and I actually argue that there is an asymmetry: I argue the very worst world is much more bad than the very best world is good.
But arguing for such an asymmetry still does not address questions about whether or how purported goods can morally outweigh extreme suffering or extremely bad lives.
Of course, there's only so much one can do in a single chapter of a general-audience book, and all of these issues warrant a lot more discussion than I was able to give!
That is understandable. But still, I think overly strong conclusions were drawn in the book based on the discussion that was provided. For instance, Chapter 9 ends with these words:
All things considered, it seems to me that the greater likelihood of eutopia is the bigger consideration. This gives us some reason to think the expected value of the future is positive. We have grounds for hope.
But again, no justification has been provided for the view that purported goods can outweigh severe bads, such as extreme suffering, extremely bad lives, or vast numbers of extremely bad lives. Nor do I think the book addresses the main points made in Anthony DiGiovanni’s post A longtermist critique of “The expected value of extinction risk reduction is positive”, which essentially makes a case against the final conclusion of Chapter 9.
[the view that intrinsically positive lives do not exist] implies that there wouldn't be anything wrong with immediately killing everyone reading this, their families, and everyone else, since this supposedly wouldn't be destroying anything positive.
This is not true. The view that killing is bad and morally wrong can be, and has been, grounded in many ways besides reference to positive value.
First, there are preference-based views according to which it would be bad and wrong to thwart preferences against being killed, even as the creation and satisfaction of preferences does not create positive value (cf. Singer, 1980; Fehige, 1998). Such views could imply that killing and extinction would overall be bad.
Second, there are views according to which death itself is bad and a harm, independent of — or in addition to — preferences against it (cf. Benatar, 2006, pp. 211-221).
Third, there are views (e.g. ideal utilitarianism) that hold that certain acts such as violence and killing, or even intentions to kill and harm (cf. Hurka, 2001; Knutsson, 2022), are themselves disvaluable and make the world worse.
Fourth, there are nonconsequentialist views according to which we have moral duties not to harm or kill, and such duties may be combined with a wide range of axiologies, including those that deny positive intrinsic value. ("For deontologists, a killing is a wrong under most circumstances, and its wrongness does not depend on its consequences or its effects on overall welfare." Sunstein & Vermeule, 2005.) Such duties can, yet need not, rest on a framework of moral rights.
As for experientialist minimalist views in particular (i.e. views that say that the reduction of experienced bads is all that matters), I would highly recommend reading Teo Ajantaival's essay Peacefulness, nonviolence, and experientialist minimalism. It provides an elaborate discussion of cessation/non-creation implications from the perspective of that specific class of minimalist views.
Teo's post also makes the important point that offsetting consequentialist views (e.g. classical utilitarianism) arguably have worse theoretical cessation implications than do minimalist experientialist views (see also the footnote below). Last but not least, the post highlights the importance of distinguishing purely hypothetical questions from practical questions, and highlights the strong reasons to not only pursue a cooperative approach, but also ("as far as is possible and practicable") a nonviolent and nonaggressive approach.
these semi-nihilistic views
I would strongly resist that characterization. For instance, a Buddhist axiology focused on the alleviation of suffering and unmet needs on behalf of all sentient beings is, to my mind at least, the very opposite of nihilism. Its upshot, in essence, is that it recommends that we pursue a deeply meaningful and compassionate purpose, aimed at alleviating the burdens of the world. Indeed, not only do I find this to be positively anti-nihilistic, but also supremely beautiful.
(Perhaps also see this post, especially the final section on meaning and motivation.)
as long as we're reviving old debates, readers may be interested in Toby Ord's arguments against many of these views (and e.g. this response).
I have recently written a point-by-point reply to Ord's essay.
And, FWIW, I think reference to positive value is not a promising way to ground the view that killing is wrong. As many have noted, moral views that ground the wrongness of killing purely in, say, the loss of pleasurable experiences tend to be vulnerable to elimination arguments, which say that we should, at least in theory, kill people if we can replace them with happier beings.
Thus, to borrow from your comment (in bold), one could likewise make the following claim about classical utilitarianism:
"It implies that there wouldn't be anything wrong with immediately killing everyone reading this, their families, and everyone else, if we could in turn create isolated matrix lives that experience much more pleasure. Indeed, unlike suffering-focused views, classical utilitarianism would allow each of these killings to involve vast amounts of unrelenting torture, provided that 'sufficiently many' happy matrix lives are created in turn."
I take this to be a worse implication. Of course, a classical utilitarian would be quick to highlight many nuances and caveats here, and not least to highlight the hypothetical nature of this scenario. But such points will generally also apply in the case of experientialist minimalist views.
Thanks for your question, Michael :)
I should note that the main thing I take issue with in that quote of MacAskill's is the general (and AFAICT unargued) statement that "any argument for the first claim would also be a good argument for the second". I think there are many arguments about which that statement is not true (some of which are reviewed in Gloor, 2016; Vinding, 2020, ch. 3; Animal Ethics, 2021).
As for the particular argument of mine that you quote, I admit that a lot of work was deferred to the associated links and references. I think there are various ways to unpack and support that line of argument.
One of them rests on the intuition that ethics is about solving problems (an intuition that one may or may not share, of course). If one shares that moral intuition, or premise, then it seems plausible to say that the presence of suffering or miserable lives amounts to a problem, or a problematic state, whereas the absence of pleasure or pleasurable lives does not (other things equal) amount to a problem for anyone, or to a problematic state. That line of argument (whose premises may be challenged, to be sure) does not appear "flippable" such that it becomes a similarly plausible argument in favor of any supposed goodness of creating a happy life.
Alternatively, or additionally, one can support this line of argument by appealing to specific cases and thought experiments, such as the following (sec. 1.4):
we would rightly rush to send an ambulance to help someone who is enduring extreme suffering, yet not to boost the happiness of someone who is already doing well, no matter how much we may be able to boost it. ... Similarly, if we were in the possession of pills that could raise the happiness of those who are already happy to the greatest heights possible, there would be no urgency in distributing these pills, whereas if a single person fell to the ground in unbearable agony right before us, there would indeed be an urgency to help.
... if a person is in a state of dreamless sleep rather than a state of ecstatic happiness, this cannot reasonably be characterized as a disaster or a catastrophe. The difference between these two states does not carry great moral weight. By contrast, the difference between sleeping and being tortured does carry immense moral weight, and the realization of torture rather than sleep would indeed amount to a catastrophe. Being forced to endure torture rather than dreamless sleep, or an otherwise neutral state, would be a tragedy of a fundamentally different kind than being forced to “endure” a neutral state instead of a state of maximal bliss.
These cases also don't seem "flippable" with similar plausibility. And the same applies to Epicurean/Buddhist/minimalist views of wellbeing and value.
An alternative is to speak in terms of urgency vs. non-urgency, as Karl Popper, Thomas Metzinger, and Jonathan Leighton have done, cf. Vinding, 2020, sec. 1.4.
It might also be worth distinguishing stronger and weaker asymmetries in population ethics. Caviola et al.'s main study indicates that laypeople on average endorse at least a weak axiological asymmetry (which becomes increasingly strong as the populations under consideration become larger), and the pilot study suggests that people in certain situations (e.g. when considering foreign worlds) tend to endorse a rather strong one, cf. the 100-to-1 ratio.