antimonyanthony

I'm Anthony DiGiovanni, a suffering-focused AI safety researcher. I (occasionally) write about altruism-relevant topics on my blog, Ataraxia.

Comments

Suffering-Focused Ethics (SFE) FAQ
Carl suggests using units such that when a painful experience and pleasurable experience are said to be of "equal intensity" then you are morally indifferent between the two experiences

I think that's confusing and non-standard. If your definition of intensities is itself a normative judgment, how do you even define classical utilitarianism versus suffering-focused versions? (Edit: after re-reading Carl's post I see he proposes a way to define this in terms of energy. But my impression is still that the way I'm using "intensity," as non-normative, is pretty common and useful.)

What I meant by my original question was: do you have an alternative definition of what it means for pain/pleasure experiences to be of "equal intensity" that is analogous to this one?

Analogous in what way? The point of my alternative definition is to provide a non-normative currency so that we can meaningfully ask what the normative ratios are (what David Althaus calls N-ratios here). So I guess I just reject the premise that an analogous definition would be useful.
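To make the idea of an N-ratio a bit more concrete (a rough sketch in my own notation, not anything from Althaus's post): let $i(e)$ denote the non-normative intensity of an experience $e$. The N-ratio is then the number $N$ such that one is morally indifferent between preventing a suffering experience $s$ and creating a happy experience $h$ exactly when

$$i(h) = N \cdot i(s)$$

On these units, classical utilitarianism corresponds to $N = 1$, "weak NU" to some $N > 1$, and lexical suffering-focused views to there being no finite $N$ that suffices for some forms of suffering.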

ETA: If it helps to interpret my original response, I think you can substitute (up to some unit conversion) energy for intensity. In other words, my SFE intuitions aren't derived from a comparison of suffering experiences that require a lot of energy with happy experiences that don't require much energy. I see an asymmetry even when the experiences seem to be energetically equivalent. I don't know enough neuroscience to say whether my intuitions about energetic equivalence are accurate, but it seems to beg the question against SFE to assume that even the highest-energy happy experiences humans currently have involve less energy than a headache. (Not saying you're necessarily assuming that, but I don't see how Carl's argument would go through without something like that.)

Suffering-Focused Ethics (SFE) FAQ

Basically the same thing other people mean when they use that term in discussions about the ethics of happiness and suffering. I introspect that different valenced experiences have different subjective strengths; without any (moral) value judgments, it seems not very controversial to say the experience of a stubbed toe is less intense than that of a depressive episode, and that of a tasty snack is less intense than that of a party with close friends. It seems intuitive to compare the intensities of happy and suffering experiences, at least approximately.

The details of these comparisons are controversial, to be sure. But I don't think it's a confused concept, and if we didn't have a notion of equal intensities, non-SFE views wouldn't have recourse to the criticism that SFE involves a strange asymmetry.

Suffering-Focused Ethics (SFE) FAQ

My response is that my own SFE intuitions don't rely on comparing the worst things people can practically experience with the best things we can practically experience. I see an asymmetry even when comparing roughly equal intensities, difficult though that is to define, or when the intensity of the suffering seems smaller than that of the happiness. To me it really does seem morally far worse to give someone a headache for a day than to temporarily turn them into a P-zombie on their wedding day. "Far worse" doesn't quite express it; I think the difference is qualitative, i.e., it doesn't appear to be a problem for a person not to be experiencing more intensely happy versions of suffering-free states.

I think Shulman's argument does give prima facie cause for suspicion of suffering-focused intuitions, and it's a reason for some optimism about the empirical distribution of happiness and suffering. (Whether that's really comforting depends on your thoughts on complexity of value.) But it's not overwhelming as a normative argument, and I think the "asymmetry is a priori weird" argument only works against forms of "weak NU" (views on which all suffering is commensurable with happiness, just not at a 1:1 ratio).

Suffering-Focused Ethics (SFE) FAQ
The moral asymmetry is most intuitively compelling when it is interpersonal. Most of us judge that it is wrong to make a person suffer even if it would make another person happy, or trade the intense suffering of a single person for the mild enjoyment of a large crowd, however large the crowd is.

Furthermore, these thought experiments would be much less compelling had they been reversed. It does not seem obviously wrong to reduce a person’s happiness to prevent someone’s suffering. Neither does it seem wrong to prevent intense pleasure for a single person in order to stop a large number of people’s mild suffering. This suggests that the intuitive force behind these thought experiments is driven by an asymmetry between suffering and happiness, rather than a moral prohibition against instrumentalization.

These are important points that I think often get missed in discussions of SFE - thanks for including them!

You mention the Repugnant Conclusion (I'd prefer to call it the Mere Addition Paradox for neutrality, though I'm guilty of not always doing this) as something that SFE escapes. I think this depends on the formulation, though in my estimation the form of the RC that SFE endorses is really not so problematic, as many non-SFE longtermists seem to agree. The Very Repugnant Conclusion (also not the most neutral name :)) strikes me as far worse and as deserving more attention in population ethics discourse, though SFE has its own counterintuitive implications that make me put some weight on other views.

Including Arrhenius and Bykvist as examples of philosophers who support negative utilitarianism might be a bit misleading. In Knutsson's sources they do claim to put more weight on suffering than on happiness, but I think that when most people use the term "negative utilitarianism" they mean more than this: something like the set of views on which at least some forms of suffering cannot be morally outweighed by any happiness or other purported goods. Judging from at least Arrhenius's other writings (I'm less familiar with Bykvist), as I understand them, he doesn't fall into that group. That said, Arrhenius did propose the VRC as an important population ethics problem, and that seems to afflict most non-NU views.

New Articles on Utilitarianism.net: Population Ethics and Theories of Well-Being

+1, the dismissive tone of the following passage especially left a bad taste in my mouth:

After all, when thinking about what makes some possible universe good, the most obvious answer is that it contains a predominance of awesome, flourishing lives. How could that not be better than a barren rock? Any view that denies this verdict is arguably too nihilistic and divorced from humane values to be worth taking seriously.

It should be pretty clear to someone who has studied alternatives to total symmetric utilitarianism - not all of which are averagist or person-affecting views! - that some of these alternatives are thoroughly motivated by "humane," rather than "nihilistic," intuitions.

What would you do if you had half a million dollars?

This is easy for me to say as someone who agrees with these donation recommendations, but I find it disappointing that this comment apparently has gotten several downvotes. The comment calls attention to a neglected segment of longtermist causes, and briefly discusses what sorts of considerations would lead you to prioritize those causes. Seems like a useful contribution.

What would you do if you had half a million dollars?

Also, considering extinction specifically, Will MacAskill has made the argument that we should avert human extinction based on option value even if we think extinction might be best. Basically even if we avert extinction now, we can in theory go extinct later on if we judge that to be the best option.

Note that this post (written by people who agree that reducing extinction risk is good) provides a critique of the option value argument.

A longtermist critique of “The expected value of extinction risk reduction is positive”

I'm not sure I get why "Singletons about non-life-maximizing values are also convergent", though.

Sorry, I wrote that point lazily because that whole list was supposed to be rather speculative. It should read "Singletons about non-life-maximizing values could also be convergent." I think that if some technologically advanced species doesn't go extinct, the same sorts of forces that allow some human institutions to persist for millennia (religions are the best example, I guess), combined with goal-preserving AIs, would make the emergence of a singleton fairly likely. I'm not very confident in this, though, and I think #2 is the weakest argument. Bostrom's "The Future of Human Evolution" touches on similar points.

A longtermist critique of “The expected value of extinction risk reduction is positive”

What I mean is closest to #1, except that B has some beings who only experience disvalue, and that disvalue is arbitrarily large. Their lives are pure suffering. This is in a sense weaker than the procreation asymmetry, because someone could agree with the PDP but still think it's okay to create beings whose lives have a lot of disvalue as long as their lives also have a greater amount of value. Does that clarify things? Maybe I should add rectangle diagrams. :)
