Working to reduce extreme suffering for all sentient beings. Author of 'Suffering-Focused Ethics: Defense and Implications'.
The way I think about it, when I'm suffering, this is my brain subjectively "disvaluing" (in the sense of wanting to end or change it) the state it's currently in.
This is where I see a dualism of sorts, at least in the way it's phrased. There is the brain disvaluing (as an evaluating subject) the state it's in (where this state is conceived of as an evaluated object of sorts). But the way I think about it, there is just the state your mind-brain is in, and the disvaluing is part of that mind-brain state. (What else could it be?)
This may just seem semantic, but I think it's key: the disvaluing, or sense of disvalue, is intrinsic to that state. It relates back to your statement that reality simply is, and interpretation adds something to it. To which I'd still say that interpretations, including disvaluing in particular, are integral parts of reality. They are intrinsic to the subset of reality that is our mind-brains.
This is not the same as saying that there exists a state of the world that is objectively to be disvalued.
I think it's worth clarifying what the term "objectively" means here. In line with my point above, I think it's true to say that there is a state of the world that is disvalued, and hence disvaluable according to that state itself. And this is true no matter where in the universe this state is instantiated. In this sense, it is objectively (i.e. universally) disvaluable. And I don't think things change when we introduce "other" individuals into the picture, as we discussed in the comments on your first post in this sequence (I also defended this view at greater length in the second part of my book You Are Them).
I talk about notions like 'life goals' (which sort of consequentialist am I?), 'integrity' (what type of person do I want to be?), 'cooperation/respect' (how do I think of the relation between my life goals and other people's life goals?), 'reflective equilibrium' (part of philosophical methodology), 'valuing reflection' (the anti-realist notion of normative uncertainty), etc.
Ah, I think we've talked a bit past each other here. My question about bedrock concepts was mostly about why you would question them in general (as you seem to do in the text), and what you think the alternative is. For example, it seems to me that the notions you consider foundational in your ethical perspective in particular do in turn rest on bedrock concepts that you can't really explain more reductively, i.e. with anything but synonymous concepts ("goals" arguably being an example).
From one of your replies to MichaelA:
I should have chosen a more nuanced framing in my comment. Instead of saying, "Sure, we can agree about that," the anti-realist should have said "Sure, that seems like a reasonable way to use words. I'm happy to go along with using moral terms like 'worse' or 'better' in ways where this is universally considered self-evident. But it seems to me that you think you are also saying that for every moral question, there's a single correct answer [...]"
It seems to me your conception of moral realism conflates two separate issues:
1. Whether there is such a thing as (truly) morally significant states, and
2. Whether there is a single correct answer for every moral question.
I think these are very different questions, and an affirmative answer to the former need not imply an affirmative answer to the latter. That is, one can be a realist about the former while being a non-realist about the latter.
For example, one can plausibly maintain that a given state of suffering is intrinsically bad and ought not exist without thinking that there is a clear answer, even in principle, concerning whether it is more important to alleviate this state or some other state of similarly severe suffering. As Jamie Mayerfeld notes, even if we think states of suffering occupy a continuum of (genuine) moral importance, the location of any given state of suffering on this continuum "may not be a precise point" (Mayerfeld, 1999, p. 29). Thus, one can be a moral realist and still embrace vagueness in many ways.
I think it would be good if this distinction were clearer in this discussion, and if these different varieties of realism were acknowledged. After all, you seem quite sympathetic to some of them yourself.
Thanks for sharing your reflections :-)
This is because of imagining and seeing examples like those in the book and here.
Just wanted to add a couple of extra references like this:
The Seriousness of Suffering: Supplement
The Horror of Suffering
Preventing Extreme Suffering Has Moral Priority
To be more specific, I think that one second of the most extreme suffering (without subsequent consequences) would be better than, say, a broken leg.
Just want to note, also for other readers, that I say a bit about such sentiments involving "one second of the most extreme suffering" in section 8.12 in my book. One point I make is that our intuitions about a single second of extreme suffering may not be reliable. For example, we probably tend not to assign great significance, intuitively, to any one-second-long chunk of experience. This gives us reason to think that the intuition that one second of extreme suffering can't matter much says more about our insensitivity to brief experiences in general than about extreme suffering in particular.
If that holds, then any extreme suffering can be overcome by mild suffering.
I think this is a little too quick, at least in the way you've phrased it. A broken leg hardly results in merely mild suffering, at least by any common definition. And a lexical threshold has, for example, been defended between "mere discomfort" and "genuine pain" (see Klocksiem, 2016), where a broken leg would clearly entail the latter.
There are also other reasons why this argument (i.e. "one second of extreme suffering can be outweighed by mild suffering, hence any amount of extreme suffering can") isn't valid.
Note also that even if one thinks that aggregates of milder forms of suffering can be more important than extreme suffering in principle, one may still hold that extreme suffering dominates profusely in practice, given its prevalence.
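The structure of a lexical view like Klocksiem's can be made precise by ranking outcomes lexicographically, so that no amount of suffering below the threshold aggregates across it. Here is a minimal sketch; the threshold, the severity numbers, and the `badness_key` function are purely illustrative assumptions of mine, not anything from the book or from Klocksiem:

```python
# Lexical ("lexicographic") ranking of outcomes: compare first by total
# suffering above a severity threshold, and only then by total milder
# suffering. All numbers below are hypothetical.

EXTREME_THRESHOLD = 100  # illustrative severity cutoff

def badness_key(episodes):
    """Map a list of (severity, duration) episodes to a sortable key:
    (total above-threshold suffering, total below-threshold suffering)."""
    extreme = sum(sev * dur for sev, dur in episodes if sev >= EXTREME_THRESHOLD)
    mild = sum(sev * dur for sev, dur in episodes if sev < EXTREME_THRESHOLD)
    return (extreme, mild)

# One second of extreme suffering vs. an arbitrarily long milder episode:
one_second_extreme = [(150, 1)]
eons_of_mild = [(50, 10_000_000)]

# Python compares tuples lexicographically, so the first component is
# decisive: no amount of mild suffering ever "adds up" to extreme.
assert badness_key(one_second_extreme) > badness_key(eons_of_mild)
```

Because the comparison is lexicographic on the first component, the chain "one second of extreme suffering can be outweighed by mild suffering, and mild suffering aggregates, hence any amount of extreme suffering can be outweighed" cannot get going: the second component never trades off against the first.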
Now, many people would make mild tradeoffs for other things they hold important.
I just want to flag here that the examples you give seem to be intrapersonal ones, and the permissibility of intrapersonal tradeoffs like these (which is widely endorsed) does not imply the permissibility of similar tradeoffs in the interpersonal case (which more people would reject, and which there are many arguments against, cf. chapter 3).
The following is neither a request nor a complaint, but in relation to the positions you express, I see little in the way of counterarguments to, or engagement with, the arguments I've put forth in my book, such as in chapters 3 and 4, for example. In other words, I don't really see the arguments I present in my book addressed here (to be clear, I'm not claiming you set out to do that), and I'm still keen to see some replies to them.
Thanks for your comment. I appreciate it! :-)
In relation to counterintuitions and counterarguments, I can honestly say that I've spent a lot of time searching for good ones, and tried to include as many as I could in a charitable way (especially in chapter 8).
I'm still keen to find more opposing arguments and intuitions, and to see them explored in depth. As hinted in the post, I hope my book can provoke people to reflect on these issues and to present the strongest case for their views, which I'd really like to see. I believe such arguments can help advance the views of all of us toward greater levels of nuance and sophistication.
Thanks for your comment, Michael :-)
What I was keen to get an example of was mainly this (omitted in the text you quoted above):
Also, whenever there was a problem with an argument, Magnus can retreat to a less demanding version of Suffering-Focused Ethics, which makes it more difficult for the reader to follow the arguments.
That is, an example of how I retreat from the main position I defend (in chapters 4 and 5), such as by relying on the views of other philosophers whose premises I haven't defended. I don't believe I do that anywhere. Again, what I do in some places is simply to show that there are other kinds of suffering-focused views one may hold; I don't retreat from the view I in fact hold.
It's true that I do mention the views of many different philosophers, and note how their views support suffering-focused views, and in some cases I merely identify the moral axioms, if you will, underlying these views. I then leave it to the reader to decide whether these axioms are plausible (this is a way in which I in fact do explain/present views rather than try to "persuade"; chapter 2 is very similar, in that it also presents a lot of views in this way).
It seems that Shiffrin and Parfit did, for example, consider their respective principles rather axiomatic, and provided little to no justification for them (indeed, Parfit considered his compensation principle "clearly true", https://web.archive.org/web/20190410204154/https://jwcwolf.public.iastate.edu/Papers/JUPE.HTM ). Mill's principle was merely mentioned as one that "can be considered congruent" with a conclusion I argued for; I didn't rely on it to defend the conclusion in question.
Thanks for sharing your review. A few comments:
Concerning the definition of suffering, I do actually provide a definition: an overall bad feeling, or state of consciousness (as I note, I here follow Mayerfeld, 1999, pp. 14-15). One may argue that this is not a particularly reductive definition, and I say the same in a footnote:
One cannot, I submit, define suffering in more precise or reductive terms than this. For just as one cannot ultimately define the experience of, say, phenomenal redness in any other way than by pointing to it, one cannot define a bad overall feeling, i.e. suffering, in any other way than by pointing to the aspect of consciousness it refers to.
I think that he made a deliberate choice to focus on capturing a wide range of views and defenses instead of going deep into defending one view.
Partly. I would say I both tried to make a broad case and defend a specific view, namely the view(s) I defend in chapters 4 and 5 (they aren't quite identical, but I'd say they are roughly equivalent at the level of normative ethics).
In Chapter 5 Magnus explains his position regarding suffering, but throughout the first part he does not rely on that in order to make a case for suffering focused ethics. Instead, he loads philosophical ammunition from all over the suffering-focused ethics coalition and shoots them at every obstacle in sight.
That's not quite how I see it (though it's true that I don't rely strongly on the meta-ethical view defended in chapter 5). My own view, including chapter 5 in particular, is not really isolated from the arguments I make in the preceding chapters. I see most of the arguments outlined in previous chapters as lending support to the arguments made in chapter 5, and I indeed explicitly cite many of them there.
Many of the arguments are of the form "philosopher X thinks that Y is true", but without appropriate arguments for Y. Also, whenever there was a problem with an argument, Magnus can retreat to a less demanding version of Suffering-Focused Ethics, which makes it more difficult for the reader to follow the arguments.
I'd appreciate some examples (or just one) of this. :-)
I don't think I at any point retreat from the view I defend in chapters 4 and 5. But I do explain how one can hold other suffering-focused views (e.g. pluralist ones, such as those defended by Wolf and Mayerfeld).
My major issue with this book is that it feels heavily biased. I felt that I was being persuaded, not explained to.
I did seek to explain the arguments and considerations that have led me to hold a suffering-focused view, and I do happen to find these arguments persuasive.
I wonder what you think I should have done differently, and whether you can refer me to a book defending a moral view in a way that was more "explaining".
It feels that Magnus offers no major concessions, related to the point above that there is always a line of retreat.
What major concessions do you feel I should make? My view is that it cannot be justified to create purported positive goods at the price of extreme suffering, and it would be dishonest for me to claim that I've found a persuasive argument against this view. But I'm keen to hear any counterargument you find persuasive.
In chapter 7, there is a long list of possible biases that prevent us from accepting Suffering-Focused Ethics.
This is not quite accurate, and I should have made this clearer. :-)
As I say at the beginning of this chapter, I here "present various biases against giving suffering its due moral weight and consideration." This is not the same as (only) presenting biases against suffering-focused moral views in particular. One can be a classical utilitarian and still think that most, perhaps even all, of the biases mentioned in this chapter plausibly bias us against giving sufficient priority to suffering.
For example, a classical utilitarian can agree that we tend to shy away from contemplating suffering (7.2); that we underestimate how bad suffering often is (7.4); that we underestimate and ignore our ability to reduce suffering, in part because of omission bias (7.5); that we have a novelty bias and scope insensitivity (7.6); that we have a perpetrator bias that leads us to dismiss suffering not caused by moral agents (7.7); that the Just World Fallacy leads us to dismiss others' suffering (7.8); that we have a positivity and an optimism bias (7.9); that a craving for certain sources of pleasure, e.g. sex and status, can distort our judgments (7.10); that we have an existence bias, of which widespread resistance against euthanasia is an example (7.11); that suffering is a very general phenomenon, which makes it difficult for us to make systematic and effective efforts to prevent it (7.13); etc.
I'd actually say that most of the biases reviewed are not biases against accepting suffering-focused moral views, but rather biases against giving the priority to reducing suffering that the values we already hold would require. I should probably have made this clearer (I say a bit more on this in the second half of section 12.3).
and really the biggest flaw for me was that there was no analogous comparison with possible biases [favoring] Suffering-Based Ethics.
But there was in fact a section on this: 7.15. If you feel I've missed some important considerations, I'm keen to hear about them.
Also, in Chapter 8 Magnus presents many arguments against his views, each a couple of sentences, and spends the majority of the time making counterarguments and half-hearted concessions.
I wonder what you mean by "half-hearted concessions", and why you think they are half-hearted. Also, it's not true that each counterargument gets only "a couple of sentences", though most are admittedly stated concisely.
Instead of acknowledging reasonable ethical views that may oppose Suffering-Focused Ethics, there is an attempt at convincing the readers that there is still some way of reducing suffering that they should prefer.
As mentioned above, my view is that it cannot be justified to create purported positive goods at the price of extreme suffering. I cannot honestly say that I find views that would have us increase extreme suffering in order to increase, say, pleasure to be reasonable. So again, all I can say is that I'd invite you to present and defend the views that you think I should acknowledge as reasonable.
After reading this book, it is clearer to me that I find extreme suffering very bad
I'm glad to hear that. Helping people clarify their views of the significance of extreme suffering is among the main objectives of the book.
but that in general I tend to think suffering can be outweighed.
This is then where I, apropos your complaint about a lack of "appropriate arguments" for a stated premise, would ask for some arguments: how and why can extreme suffering be outweighed? What counterarguments would you give to the arguments presented in, say, chapters 3 and 4?
Also, I was worried before reading the book that there is an inherent difficulty in cooperation between suffering-focused ethical systems and aspirations for more (happy) people to exist. I still think that's somewhat the case but it is clearer that these differences can be overcome and that one can value both.
Pleased to hear this. The second part of the book should lend even more support to that view. I very much hope we can all cooperate closely rather than fall victim to tribal psychology, as difficult as that can be. As I note in chapter 10, disagreeing on values is arguably a strong catalyst for outgroup perception. Let's resist falling prey to that.
Thanks again for taking the time to read and review the first part of the book. :-)
Thanks for your question, Niklas. It's an important one.
The following link contains some resources for sustainable activism that I've found useful:
But specifically, it may be useful to cultivate compassion — the desire for other beings to be free from suffering — more than (affective) empathy, i.e. actually feeling the feelings of those who suffer.
Here is an informative conversation about it: https://www.youtube.com/watch?v=CJ1SuKOchps
As I write in section 9.5 (see the book for references):
Research suggests that these meditation practices [i.e. compassion and loving-kindness meditation] not only increase compassionate responses to suffering, but that they also help to increase life satisfaction and reduce depressive symptoms for the practitioner, as well as to foster better coping mechanisms and increased positive affect in the face of suffering.
Normative ethics: There’s a sense in which consequentialist obligations to avoid purchasing meat from factory-farmed animals are “real.” But we could also take a different perspective (according to which morality is about hypothetical contracts between people), in which case we’d see no obligations toward animals.
Realists of course agree that we can take another perspective, and that this can be fruitful, but the crucial issue for the realist is whether one perspective is ultimately more valid, or true, than others (as you hint further down). This is where I think the duck-rabbit analogy breaks down for most realists: no one is tempted to claim that one interpretation of the image is more valid or true than the other; there seems no compelling reason for believing that. But when it comes to ethics, realists maintain that we do have truly compelling reasons to prefer, for example, a view that says "minimize suffering" rather than "maximize suffering". The realist will deny that both are valid "interpretations" of ethics.
According to my anti-realist perspective, reality simply is, but interpretations always add something. Deep down, all interpretations are arbitrary and we can always take on the “stubborn” perspective to say that there’s not even a question that needs to be answered.
I see a duality in such an anti-realist view in that interpretations appear to be thought of as something separate from reality. I think this is a mistake — especially relative to qualia-based moral and value realism — and I think it is a mistake to consider all interpretations arbitrary: it is not arbitrary to consider suffering bad. Indeed, interpretation (broadly defined) is arguably intrinsic to, and in some sense constitutive of, suffering itself (cf. Aydede, 2014).
Of course, if we define reality as everything devoid of interpretive features, and consider all evaluations to be separate from reality, then I'd agree that there obviously are no valid evaluations to be found in "reality". But then I think we're working with a very impoverished conception of reality, and certainly one that many realists about value and ethics would reject. This might be a crux in terms of how people think differently about these things.
I see a similar duality in the dichotomy drawn between "expressions of attitudes" versus "statements of fact" in the outline of non-cognitivism. Many moral realist views (e.g. the views defended in Pearce, 1995; Hewitt, 2008) are all about the "attitudes" (in a broad sense) of sentient beings, and such views consider the possibility space of these "attitudes" a domain of facts. Such views are not really "speaker-dependent", as in merely being about the attitudes of the people who hold these moral views, but are rather about the "attitudes" of all sentient beings. I think you've expressed a similar view/definition of qualia in the past — one that excludes evaluations and preferences, unlike the conception of qualia employed by most of those who defend value or moral realist views based on qualia.
In the context of bedrock concepts, it's not clear to me why such concepts should be considered problematic. After all, what is the alternative? An infinite regress of concepts? A circular loop? Having bedrock concepts seems to me the least problematic — indeed positively plausible — option.
Laying out what constitutes philosophical progress then becomes a bedrock concept as well
I don't see how that follows. Accepting bedrock concepts need not imply that the most plausible conception of philosophical progress will be bedrock.
In relation to anti-realism about consciousness, I think anti-realists often fail to be clear about what they are denying the reality of. I've drawn an analogy to sound (phenomenal experience in general) and music (a coherent, ordered conscious mind in particular). Realists will agree, trivially, that whether we assign a set of sounds (phenomenal experience) the label "music" (conscious) is up to us, and that there is hardly any fact of the matter, yet whether such a thing as sound (phenomenal experience) exists in the first place is undeniable. The defenses I've seen of anti-realism about consciousness are generally very unclear about whether they are denying the reality of "music" or the reality of "sound" altogether. For example:
I’m a consciousness denier, though it’s important to clarify that I’m not denying that I have first-person experiences of the world. I am fully on board with, “I think, therefore I am,” and the notion that you can have 100% confidence in your own first-person experience.
This seems to affirm the reality of phenomenal experience, and suggests that the position defended is just a fairly trivial nominalist view of the concept of consciousness.
Great questions. Let me see whether I can do them justice.
If you could change people's minds on one thing, what would it be? I.e. what do you find the most frustrating/pernicious/widespread mistake on this topic?
Three important things come to mind:
1. There seems to be this common misconception that if you hold a suffering-focused view, then you will, or at least you should, endorse forms of violence that seem abhorrent to common sense. For example, you should consider it good when people get killed (because it prevents future suffering for them), and you should try to destroy the world. This doesn't follow. For many reasons.
First, one may hold a pluralist view according to which we have a prima facie obligation to reduce suffering, but also, for example, prima facie obligations not to kill and to respect the autonomy of other individuals. Indeed, academics such as Clark Wolf and Jamie Mayerfeld defend suffering-focused views of this kind. See:
https://web.archive.org/web/20190410204154/https://jwcwolf.public.iastate.edu/Papers/JUPE.HTM
https://onlinelibrary.wiley.com/doi/abs/10.1111/j.2041-6962.1996.tb00795.x
https://www.amazon.com/Suffering-Moral-Responsibility-Oxford-Ethics/dp/0195115996
Beyond that, even on purely welfarist (suffering-focused) views, there are many strong reasons to consider it bad when individuals die, and to oppose world destruction (see sections 8.1 and 8.2). In fact, the objections commonly raised against suffering-focused views are often objections against purely welfarist views in general rather than against the moral asymmetry between happiness and suffering, since for any welfarist view one can construct an argument to the effect that one should be willing to kill for trivial reasons. For example, naively interpreted, classical utilitarianism implies that one should be willing to kill a person, and indeed destroy the world, to prevent the smallest amount of suffering if the "sum" of happiness and suffering would otherwise be exactly zero (a point often made by David Pearce). Likewise, a classical utilitarian should endorse what is arguably an even more repugnant world-destruction conclusion than the negative utilitarian one: if we could push a button that first unleashes ceaseless torture upon every sentient individual for decades, and then destroys our world so as to give rise to a "greater" amount of pleasure in some new world, then classical utilitarianism would oblige us to press this button.
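The naive classical-utilitarian comparison in the button case is just arithmetic on the aggregate. The numbers below are entirely hypothetical, chosen only to exhibit the structure of the argument:

```python
# A naive classical-utilitarian sum over two options, with made-up numbers.
# "Status quo": happiness and suffering exactly cancel.
status_quo = 0

# "Button": first decades of torture for everyone, then a marginally
# "greater" quantity of pleasure in some new world.
torture_total = -1_000_000
new_world_pleasure = 1_000_001
button = torture_total + new_world_pleasure

# The naive aggregate favors pressing the button, however the total
# is composed: only the sum matters on this view.
assert button > status_quo
```

The point of the sketch is that a pure sum is indifferent to how the total is distributed between extreme suffering and offsetting pleasure, which is what generates the repugnant conclusion in the text.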
But these arguments obviously don't come close to showing that classical utilitarians should endorse violence of this sort in practice; they obviously shouldn't. The same holds true when similar arguments are applied to suffering-focused views.
2. Another belief I would want to challenge is that suffering-focused EAs make the world a more dangerous place from the perspective of other value systems. I would suggest the opposite is the case, and I think what's dangerous is that people don't appreciate this.
Among people who hold suffering-focused views, suffering-focused EAs fall toward the high tail in terms of being cooperative, measured, and prudent. It's a group that already moves, and has even greater potential to move, other suffering-focused people in less naive and more cooperative directions, which is positive on all value systems. Marginalizing people with suffering-focused views within EA is really not helpful to this end.
3. A third misunderstanding is that people who hold suffering-focused views are much more concerned about mild suffering than, say, the average ethically concerned person. This need not be the case. One can hold suffering-focused views that are primarily concerned with extreme suffering, and which give overriding weight to extreme suffering without giving commensurable weight to mild suffering. I defend such views in chapters 4-5.
'if you were given 10 billion dollars and 10 years to move your field forward, how precisely would you allocate it, and what do you think you could achieve at the end?'
I think I would devote it mostly to research — to building a research field. The field of "effective suffering reduction" is very young and unexplored at this point, and much of the discussion that has taken place so far has been tied to the idiosyncratic and speculative views of a few people (unavoidably so, given that so few people have done research on these issues so far). This means that there is likely a lot of low-hanging fruit here. Building such a research project is in large part the goal of the new organization that I have recently co-founded with Tobias Baumann: Center for Reducing Suffering ( https://centerforreducingsuffering.org/ ).
I think this can give us better insights into which risks we should be most concerned about and more clarity about how we can best reduce them. There's much more to be said here, but I'll let this suffice for now.
Thanks for your comment, George.
Sections 1.4 and 8.5 in my book deal directly with the first issue you raise. Also see chapter 3, "Creating Happiness at the Price of Suffering Is Wrong", for various arguments against a moral symmetry between pleasure and suffering. But many chapters in the first part of the book deal with this.
Empirically, I think it's pretty clear that most people are willing to trade off pleasure and pain for themselves.
I say a good deal about this in chapter 2. I also discuss the moral relevance of such intrapersonal claims in section 3.2, "Intra- and Interpersonal Claims".
You're welcome! :-)
Whether this is indeed a dissenting view seems unclear. Relative to the question of how space expansion would affect x-risk, it seems that environmentalists (of whom there are many) tend to believe it would increase such risks (though it's of course debatable how much weight to give their views). Some highly incomplete considerations can be found here: https://en.wikipedia.org/wiki/Space_colonization#Objections
The sentiment expressed in the following video by Bill Maher, i.e. that space expansion is a "dangerous idea" at this point, may well be shared by many people on reflection: https://www.youtube.com/watch?v=mrGFEW2Hb2g
One may say similar things in relation to whether it's a dissenting view on space expansion as a cause (even if we hold x-risk constant). For example, space expansion would most likely increase total suffering in expectation — see https://reducing-suffering.org/omelas-and-space-colonization/ — and one (probably unrepresentative) survey found that a significant plurality of people favored "minimizing suffering" as the ideal goal a future civilization should strive for: https://futureoflife.org/superintelligence-survey/.
Interestingly, the same survey also found that the vast majority of people want life to spread into space, which appears inconsistent with the plurality preference for minimizing suffering. This seems a case of (many) people holding preferences that contradict each other, at least in terms of their likely implications.