All of antimonyanthony's Comments + Replies

Minimalist axiologies and positive lives

I think it's useful to have a thought experiment to refer to other than Omelas to capture the intuition of "a perfect, arbitrarily large utopia is better than a world with arbitrarily many miserable lives supposedly counterbalanced by sufficiently many good lives." Because:

  • The "arbitrarily many" quantifiers show just how extreme this can get, and indeed the sort of axiology that endorses the VRC is committed to judging the VRC as better the more you multiply the scale, which seems backwards to my intuitions.
  • The first option is a utopia, whereas the Omel
... (read more)
antimonyanthony's Shortform

Why you should consider trying SSRIs

I was initially hesitant to post this, out of some vague fear of stigma and stating the obvious, and not wanting people to pathologize my ethical views based on the fact that I take antidepressants. This is pretty silly for two reasons. First, I think that if my past self had read something like this, he could have been spared years of suffering, and there are probably several readers in his position. EAs are pretty open about mental illness anyway. Second, if anything the fact that I am SFE "despite" currently not bei... (read more)

Stefan_Schubert (21d): SlateStarCodex has a long post [https://slatestarcodex.com/2014/07/07/ssris-much-more-than-you-wanted-to-know/] on SSRIs and their side-effects (from 2014), including sexual side-effects. (Here is a 2016 paper [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6007725/#i2168-9709-6-4-191-b05] which also reports on sexual side-effects.) I don't have expertise in this topic, however.
Why fun writing can save lives: the case for it being high impact to make EA writing entertaining

To be clear, I wouldn't use this argument in a space where most people were a much larger inferential gap away from me. I would never try to get somebody excited about EA by telling them about how what they were currently doing was wrong.

However, I thought (and perhaps I was wrong) that EA Forum readers were close enough inferentially to just think it was funny.

For what it's worth, I think this was an entirely reasonable expectation to have, and this is how I read the title of your post. It's provocative without being "clickbaity." So I found the comments objecting to it pretty unrelatable and surprising.

Suffering-Focused Ethics (SFE) FAQ
Carl suggests using units such that when a painful experience and pleasurable experience are said to be of "equal intensity" then you are morally indifferent between the two experiences

I think that's confusing and non-standard. If your definition of intensities is itself a normative judgment, how do you even define classical utilitarianism versus suffering-focused versions? (Edit: after re-reading Carl's post I see he proposes a way to define this in terms of energy. But my impression is still that the way I'm using "intensity," as non-normative, is pretty... (read more)

Jack R (2mo): I think I meant analogous in the sense that I can then see how statements involving the defined word clearly translate to statements about how to make decisions.
Suffering-Focused Ethics (SFE) FAQ

Basically the same thing other people mean when they use that term in discussions about the ethics of happiness and suffering. I introspect that different valenced experiences have different subjective strengths; without any (moral) value judgments, it seems not very controversial to say the experience of a stubbed toe is less intense than that of a depressive episode, and that of a tasty snack is less intense than that of a party with close friends. It seems intuitive to compare the intensities of happy and suffering experiences, at least approximately.

The ... (read more)

Jack R (2mo): I feel like my question wasn't answered. For instance, Carl suggests using units such that when a painful experience and pleasurable experience are said to be of "equal intensity" then you are morally indifferent between the two experiences. This seems like a super useful way to define the units (the units can then be directly used in decision calculus). Using this kind of definition, you can then try to answer for yourself things like "do I think a day-long headache is more units of pain than a wedding day is units of pleasure?" or "do I think in the technological limit, creating 1 unit of pain will be easier than creating 1 unit of pleasure?" What I meant by my original question was: do you have an alternative definition of what it means for pain/pleasure experiences to be of "equal intensity" that is analogous to this one?
Suffering-Focused Ethics (SFE) FAQ

My response is that my own SFE intuitions don't rely on comparing the worst things people can practically experience with the best things we can practically experience. I see an asymmetry even when comparing roughly equal intensities, difficult though it is to define that, or when the intensity of suffering seems smaller than the happiness. To me it really does seem morally far worse to give someone a headache for a day than to temporarily turn them into a P-zombie on their wedding day. "Far worse" doesn't quite express it - I think the difference is quali... (read more)

Jack R (2mo): Could you try to expand upon what you mean by “equal intensities”?
Suffering-Focused Ethics (SFE) FAQ
The moral asymmetry is most intuitively compelling when it is interpersonal. Most of us judge that it is wrong to make a person suffer even if it would make another person happy, or trade the intense suffering of a single person for the mild enjoyment of a large crowd, however large the crowd is.

Furthermore, these thought experiments would be much less compelling had they been reversed. It does not seem obviously wrong to reduce a person’s happiness to prevent someone’s suffering. Neither does it seem wrong to prevent intense pleasure for a single person
... (read more)
New Articles on Utilitarianism.net: Population Ethics and Theories of Well-Being

+1, the dismissive tone of the following passage especially left a bad taste in my mouth:

After all, when thinking about what makes some possible universe good, the most obvious answer is that it contains a predominance of awesome, flourishing lives. How could that not be better than a barren rock? Any view that denies this verdict is arguably too nihilistic and divorced from humane values to be worth taking seriously.

It should be pretty clear to someone who has studied alternatives to total symmetric utilitarianism - not all of which are averagist or person-affecting views! - that some of these alternatives are thoroughly motivated by "humane," rather than "nihilistic," intuitions.

What would you do if you had half a million dollars?

This is easy for me to say as someone who agrees with these donation recommendations, but I find it disappointing that this comment apparently has gotten several downvotes. The comment calls attention to a neglected segment of longtermist causes, and briefly discusses what sorts of considerations would lead you to prioritize those causes. Seems like a useful contribution.

What would you do if you had half a million dollars?

Also, considering extinction specifically, Will MacAskill has made the argument that we should avert human extinction based on option value even if we think extinction might be best. Basically even if we avert extinction now, we can in theory go extinct later on if we judge that to be the best option.

Note that this post (written by people who agree that reducing extinction risk is good) provides a critique of the option value argument.

A longtermist critique of “The expected value of extinction risk reduction is positive”

I'm not sure I get why "Singletons about non-life-maximizing values are also convergent", though.

Sorry, I wrote that point lazily because that whole list was supposed to be rather speculative. It should be "Singletons about non-life-maximizing values could also be convergent." I think that if some technologically advanced species doesn't go extinct, the same sorts of forces that allow some human institutions to persist for millennia (religions are the best example, I guess) combined with goal-preserving AIs would make the emergence of a singleton fairly... (read more)

A longtermist critique of “The expected value of extinction risk reduction is positive”

What I mean is closest to #1, except that B has some beings who only experience disvalue and that disvalue is arbitrarily large. Their lives are pure suffering. This is in a sense weaker than the procreation asymmetry, because someone could agree with the PDP but still think it's okay to create beings whose lives have a lot of disvalue as long as their lives also have a greater amount of value. Does that clarify? Maybe I should add rectangle diagrams. :)

A longtermist critique of “The expected value of extinction risk reduction is positive”

That sounds reasonable to me, and I'm also surprised I haven't seen that argument elsewhere. The most plausible counterarguments off the top of my head are: 1) Maybe evolution just can't produce beings with that strong of a proximal objective of life-maximization, so the emergence of values that aren't proximally about life-maximization (as with humans) is convergent. 2) Singletons about non-life-maximizing values are also convergent, perhaps because intelligence produces optimization power so it's easier for such values to gain sway even though they aren'... (read more)

Jim Buhler (5mo): I completely agree with 3 and it's indeed worth clarifying. Even ignoring this, the possibility of humans being more compassionate than pro-life grabby aliens might actually be an argument against human-driven space colonization, since compassion -- especially when combined with scope sensitivity -- seems to increase agential s-risks [https://centerforreducingsuffering.org/research/a-typology-of-s-risks/#Agential_s-risks] related to potential catastrophic cooperation failure between AIs (see e.g., Baumann and Harris 2021, 46:24 [https://www.sentienceinstitute.org/podcast/episode-16.html]), which are the most worrying s-risks according to Jesse Clifton's preface to CLR's agenda [https://www.alignmentforum.org/posts/DbuCdEbkh4wL5cjJ5/preface-to-clr-s-research-agenda-on-cooperation-conflict-and]. A space filled with life-maximizing aliens who don't give a crap about welfare and suffering might be better than one filled with compassionate humans who create AGIs that might do the exact opposite of what they want (because of escalating conflicts, strategic threats, …). Obviously, uncertainty stays huge here.

Besides, 1 and 2 seem to be good counter-considerations, thanks! :) I'm not sure I get why "Singletons about non-life-maximizing values are also convergent", though. Can you -- or anyone else reading this -- point to any reference that would help me understand this?
People working on x-risks: what emotionally motivates you?

all the work done by other EAs in other causes would be for naught if we end up becoming extinct

I've seen this argument elsewhere, and still don't find it convincing. "All" seems hyperbolic. Much longtermist work to improve the quality of posthumans' lives does become irrelevant if there won't be any posthumans. But animal welfare, poverty reduction, mental health, and probably some other causes I'm forgetting will still have made an important (if admittedly smaller-scale) difference by relieving their beneficiaries' suffering.

Shallow evaluations of longtermist organizations

I think I learned a lot while I was there, and I think the other summer research fellows whose views I have a sense of felt the same

+1. I'd say that applying for and participating in their fellowship was probably the best career decision I've made so far. Maybe 60-70% of this was due to the benefits of entering a network of people whose altruistic efforts I greatly respect, the rest was the direct value of the fellowship itself. (I haven't thought a lot about this point, but on a gut level it seems like the right breakdown.)

Ozzie Gooen (5mo): Thanks for both comments here. Personal anecdotes are really valuable, and I assume they would be useful to people later trying to get some idea of the value of CLR. Sadly, I imagine there's a significant bias toward positive comments (I assume that people with negative experiences would be cautious of offending anyone), but positive comments still have signal.
Exploring a Logarithmic Tolerance of Suffering

Personally I still wouldn't consider it ethically acceptable to, say, create a being experiencing a -100-intensity torturous life provided that a life with exp(100)-intensity happiness is also created, even after trying strongly to account for possible scope neglect. Going from linear to log here doesn't seem to address the fundamental asymmetry. But I appreciate this post, and I suspect quite a few longtermists who don't find stronger suffering-focused views compelling would be sympathetic to a view like this one - and the implications for prioritizing s-risks versus extinction risks seem significant.
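For concreteness, here is a minimal sketch of the arithmetic behind that trade, assuming the logarithmic proposal is read as saying that suffering of intensity $S$ can only be counterbalanced by happiness of intensity $H$ with $\log H \geq S$ (my paraphrase, not necessarily the post's exact formulation):

$$\log H \geq S \;\Longrightarrow\; H \geq e^{S}, \qquad \text{so for } S = 100:\; H \geq e^{100} \approx 2.7 \times 10^{43}.$$

Even at that astronomical ratio, the intuition above is that the torturous life is not thereby made acceptable.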

Spears & Budolfson, 'Repugnant conclusions'

But of course the A and Z populations are already impossible, because we already have present and past lives that aren't perfectly equal and aren't all worth living. So -- even setting aside possible boundedness on the number of lives -- the RC has always fundamentally been about comparing undeniably impossible populations

I don't find this a compelling response to Guillaume's objection. There seems to be a philosophically relevant difference between physical impossibility of the populations, and metaphysical impossibility of the axiologi... (read more)

How to PhD

You will find yourself justifying the stupidest shit on impact grounds, and/or pursuing projects which directly make the world worse.

Could you be a bit more specific about this point? This sounds very field-dependent.

eca (8mo): I bet it is! The example categories I think I had in mind at the time of writing were 1) people in ML academia who want to be doing safety but instead do work that almost entirely accelerates capabilities, and 2) people who want to work on reducing biological risk but instead publish on tech which is highly dual-use or broadly accelerates biotechnology without differentially accelerating safety technology.

I know this happens because I've done it. My most successful publication to date (https://www.nature.com/articles/s41592-019-0598-1) is pretty much entirely capabilities-accelerating. I'm still not sure if it was the right call to do this project, but if it was, it will have been a narrow edge revolving on me using the cred I got from this to do something really good later on.
Proposed Longtermist Flag

I downvoted for reasons similar to Stefan's comment: longtermism is not synonymous with a focus on x-risk and space colonization, and the black bar symbolism creates that association. In EA discourse, I have observed consistent conflation of longtermism with this particular subset of longtermist priorities, and I'd like to strongly push back against that. (I believe I would feel the same even if my priorities aligned with that subset.)

On future people, looking back at 21st century longtermism

But we should care about individual orangutans, & it seems plausible to me that they care whether they go extinct. Large parts of their lives are after all centered around finding mates & producing offspring. So to the extent that anything is important to them (& I would argue that things can be just as important to them as they can be to us), surely the continuation of their species/bloodline is.

I'm pretty skeptical of this claim. It's not evolutionarily surprising that orangutans (or humans!) would do stuff that decreases their probability of... (read more)

Against neutrality about creating happy lives

For NU (including lexical threshold NU), this can mean adding an arbitrarily huge number of new people to hell to barely reduce the suffering for each person in a sufficiently large population already in hell. (And also not getting the very positive lives, but NU treats them as 0 welfare anyway.)

This may be counterintuitive to an extent, but to me it doesn't reach "very repugnant" territory. Misery is still reduced here; an epsilon change of the "reducing extreme suffering" sort, even if barely so, doesn't seem morally frivolous like the creation of an e... (read more)

Teo Ajantaival (20d): What would it mean to repeat this step (up to an infinite number of times)? Intuitively, it sounds to me like the suffering gets divided more equally between those who already exist and those who do not, which ultimately leads to an infinite population where everyone has a subjectively perfect experience. In the finite case, it leads to an extremely large population of almost perfectly untroubled lives. If extrapolated in this way, it seems quite plausible that the population we eventually get by repeating this step is much better than the initial population.
MichaelStJules (9mo): I wrote some more about this here [https://forum.effectivealtruism.org/posts/HGLK3igGprWQPHfAp/against-neutrality-about-creating-happy-lives?commentId=qT48K2fTfdQCsP8mX] in reply to Jack.
Against neutrality about creating happy lives

I am also interested by the claim in this paper that the repugnant conclusion afflicts all population axiologies, including person-affecting views

Not negative utilitarian axiology. The proof relies on the assumption that the utility variable u can be positive.

What if "utility" is meant to refer to the objective aspects of the beings' experience etc. that axiologies would judge as good or bad—rather than to moral goodness or badness themselves? Then I think there are two problems:

  • 1) Supposing it's a fair move to aggregate all these aspects into one scalar,
... (read more)
MichaelStJules (9mo): Plenty of theories avoid the RC and VRC, but this paper extends the VRC on p. 19. Basically, you can make up for the addition of an arbitrary number of arbitrarily bad lives instead of an arbitrary number of arbitrarily good lives with arbitrarily small changes to welfare to a base population, which depends on the previous factors.

For NU (including lexical threshold NU), this can mean adding an arbitrarily huge number of new people to hell to barely reduce the suffering for each person in a sufficiently large population already in hell. (And also not getting the very positive lives, but NU treats them as 0 welfare anyway.)

Also, related to your edit, epsilon changes could flip a huge number of good or neutral lives in a base population to marginally bad lives.
Against neutrality about creating happy lives

I guess it was unclear that here I was assuming that the creator knows with certainty all the evaluative contents of the life they're creating. (As in the Wilbur and Michael thought experiments.) I would be surprised if anyone disagreed that creating a life you know won't be worth living, assuming no other effects, is wrong. But I'd agree that the claim about lives not worth living in expectation isn't uncontroversial, though I endorse it.

[edit: Denise beat me to the punch :)]

Against neutrality about creating happy lives

[Apologies for length, but I think these points are worth sharing in full.]

As someone who is highly sympathetic to the procreation asymmetry, I have to say, I still found this post quite moving. I’ve had, and continue to have, joys profound enough to know the sense of awe you’re gesturing at. If there were no costs, I’d want those joys to be shared by new beings too.

Unfortunately, assuming that we’re talking about practically relevant cases where creating a "happy" life also entails suffering of the created person and other beings, there are costs in expec... (read more)

Larks (9mo): This seems possibly true to me, but not obviously the case, and definitely not uncontroversial. I would guess many people who lived unfortunate lives would nonetheless disagree that their parents inflicted a moral wrong upon them by conceiving them. Similarly, I don't think I have ever heard anyone suggest that children who suffer at the hands of abusers or terrorists were first wronged, not by their tormentor, but by their parents. Even in bleak circumstances, so long as the parents didn't intend to make things bad for the children, I think most people would refrain from such a judgement.

I found it surprising that you wrote: …

Because to me this is exactly the heart of the asymmetry. It’s uncontroversial that creating a person with a bad life inflicts on them a serious moral wrong. Those of us who endorse the asymmetry don’t see such a moral wrong involved in not creating a happy life.

+1. I think many who have asymmetric sympathies might say that there is a strong aesthetic pull to bringing about a life like Michael’s, but that there is an overriding moral responsibility not to create intense suffering.

Layman’s Summary of Resolving Pascallian Decision Problems with Stochastic Dominance

While I think this is a fascinating concept, and probably pretty useful as a heuristic in the real, hugely uncertain world, I don't think it addresses the root of the decision-theoretic puzzles here. I - and I suspect most people? - want decision theory to give an ordering over options even assuming no background uncertainty, which SD can't provide on its own. If option A is a 100% chance of -10 utility, and option B is a 50% chance of -10^20 utility and otherwise 0, it seems obvious to me that B is a very, very terrible choice that is not rationally permitted. But in a world with no background uncertainty, A would not stochastically dominate B.
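To spell out the stochastic dominance claim with the numbers above (a sketch using the standard definition that A first-order stochastically dominates B iff $F_A(x) \leq F_B(x)$ for all $x$, with strict inequality somewhere):

$$F_A(x) = \begin{cases} 0 & x < -10 \\ 1 & x \geq -10 \end{cases} \qquad F_B(x) = \begin{cases} 0 & x < -10^{20} \\ 1/2 & -10^{20} \leq x < 0 \\ 1 & x \geq 0 \end{cases}$$

At $x = -10$ we have $F_A(-10) = 1 > 1/2 = F_B(-10)$, so A does not dominate B; at $x = -10^{20}$ we have $F_B = 1/2 > 0 = F_A$, so B does not dominate A either. With no background uncertainty, SD by itself leaves both options permissible.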

antimonyanthony's Shortform

Wow, that's promising news! Thanks for sharing.

Bob Jacobs's Shortform

What if there's a small hedonic cost to creating the beautiful world? Suppose option 1 is "Creating a stunningly beautiful world that is uninhabited and won’t influence sentient beings in any way, plus giving a random person a headache for an hour."

In that case I can't really see a moral case for choosing option 1, no matter how stunningly beautiful the world in question is. This would suggest that even if there is some intrinsic value to beauty, it's extremely small if not lexically inferior to the value of hedonics. I think for basically all practical purposes we do face tradeoffs between hedonic and other purported values, and I just don't feel the moral force of the latter in those cases.

antimonyanthony's Shortform

Some reasons not to primarily argue for veganism on health/climate change grounds

I've often heard animal advocates claim that since non-vegans are generally more receptive to arguments from health benefits and reducing climate impact, we should prioritize those arguments, in order to reduce farmed animal suffering most effectively.

On its face, this is pretty reasonable, and I personally don't care intrinsically about how virtuous people's motivations for going vegan are. Suffering is suffering, no matter its sociological cause.

But there are some reasons I'... (read more)

Dan Hageman (10mo): Quick comment. With respect to your first point, this has always struck me as one of the better points as to why non-ethical arguments should primarily be avoided when it comes to making the case for veganism. However, after reading Tobias Leenaert's 'How to Create a Vegan World: A Pragmatic Approach', I've become a bit more agnostic on this notion. He notes a few studies from The Humane League that show that red-meat reducers/avoiders tend to eat less chicken than your standard omnivore. He also referenced a few studies from Nick Cooney's book, Veganomics, which covers some of this on pp. 107-111. Combined with the overall impact non-ethical vegans could have on supply/demand for other vegan products (and their improvement in quality), I've been a bit less worried about this reason. I think your other reasons are all extremely important and underrated, though, so I still lean overall toward the view that the ethical argument should be relied on when possible :)
[Podcast] Ajeya Cotra on worldview diversification and how big the future could be

Thank you for writing this critique, it was a thought I had while listening as well. In my experience many EAs make the same mistake, not just Ajeya.

antimonyanthony's Shortform

Linkpost: "Tranquilism Respects Individual Desires"

I wrote a defense of an axiology on which an experience is perfectly good to the extent that it is free of craving for change. This defense follows in part from a reductionist view of personal identity, which is usually considered in EA circles to support total symmetric utilitarianism, but I argue that this view lends support to a form of negative utilitarianism.

Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

The problem is that one man's modus ponens is another man's modus tollens.

Fair :) I admit I'm apparently unusually inclined to the modus ponens end of these dilemmas.

If there's a part of a theory that is of very little practical use, but is still seen as a strong point against the theory, we should try to find a version without it.

I think this depends on whether the version without it is internally consistent. But more to the point, the question about the value of strangers does seem practically relevant. It influences how much you're willing to effectively d... (read more)

Scope-sensitive ethics: capturing the core intuition motivating utilitarianism

In particular, it seems hard to make utilitarianism consistent with caring much more about people close to us than strangers.

Why exactly is this a problem? To me it seems more sensible to recognize our disproportionate partiality toward people close to us as an evolutionary bug, rather than a feature. Even though we do  care about people close to us much more, this doesn't mean we actually should regard their interests as overwhelmingly more important than those of strangers (whom we can probably help more cheaply), on critical reflection.

The problem is that one man's modus ponens is another man's modus tollens. Lots of people take the fact that utilitarianism says that you shouldn't care about your family more than a stranger as a rebuttal to utilitarianism.

Now, we could try to persuade them otherwise, but what's the point? Even amongst utilitarians, almost nobody gets anywhere near placing as much moral value on a spouse as on a stranger. If there's a part of a theory that is of very little practical use, but is still seen as a strong point against the theory, we should try to find a version wi... (read more)

jackmalde's Shortform

Might be outdated, and the selection of papers is probably skewed in favor of welfare reforms, but here's a bibliography on this question.

jackmalde (1y): Thanks for that
Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

There are some moral intuitions, such as the ‘procreation asymmetry’ (illustrated in the ‘central illustration’ below) that only a person-affecting view can capture.

I don't think this is exactly true. The procreation asymmetry is also consistent with any form of negative consequentialism. I wouldn't classify such views as "person-affecting," since the reason they don't consider it obligatory to create happy people is that they reject the premise that happiness is intrinsically morally valuable, rather than that they assign special importance to badness-for... (read more)

Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure

This disanalogy between the x-risk and s-risk definitions is a source of ongoing frustration to me, as s-risk discourse thus often conflates hellish futures (which are existential risks, and especially bad ones), or possibilities of suffering on a scale significant relative to the potential for suffering (or what we might expect), with bad events many orders of magnitude smaller or futures that are utopian by common sense standards and compared to our world or the downside potential.

This is a fair enough critique. But I think that from the perspective o... (read more)

Introduction to the Philosophy of Well-Being

A fourth alternative that may be appealing to those who don't find any of these three theories completely satisfying: tranquilism.

Tranquilism states that an individual experiential moment is as good as it can be for her if and only if she has no craving for change.

(You could argue this is a subset of hedonism, in that it is fundamentally concerned with experiences, but there are important differences.)

some concerns with classical utilitarianism

Accepting VRC would be required by CU, in this hypothetical. So, assuming CU, rejecting VRC would need justification.

Yep, this is what I was getting at, sorry that I wasn't clear. I meant "defense of CU against this case."

On the other hand, as Vinding also writes (ibid, 5.6; 8.10), the qualitative difference between extreme suffering and suffering that could be extreme if we push a bit further may still be huge.

Yeah, I don't object to the possibility of this in principle, just noting that it's not without its counterintuitive consequences. Neither is pure NU, or any sensible moral theory in my opinion.

some concerns with classical utilitarianism

Good point. I would say I meant intensity of the experience, which is distinct both from the intensity of the stimulus and from moral (dis)value. And I also dislike seeing conflation of intensity with moral value when it comes to evaluating happiness relative to suffering.

some concerns with classical utilitarianism

I agree with the critiques in the sections including and after "Implicit Commensurability of (Extreme) Suffering," and would encourage defenders of CU to apply as much scrutiny to its counterintuitive conclusions as they do to NU, among other alternatives. I'd also add the Very Repugnant Conclusion as a case for which I haven't heard a satisfying CU defense. Edit: The utility monster as well seems asymmetric in how repugnant it is when you formulate it in terms of happiness versus suffering. It does seem abhorrent to accept the increased suffering of many ... (read more)

JesseClifton (1y): nil already kind of addressed this in their reply, but it seems important to keep in mind the distinction between the intensity of a stimulus and the moral value of the experience caused by the stimulus. Statements like “experiencing pain just slightly stronger than that threshold” risk conflating the two. And, indeed, if by “pain” you mean “moral disvalue” then to discuss pain as a scalar quantity begs the question against lexical views. Sorry if this is pedantic, but in my experience this conflation often muddles discussions about lexical views.
nil (1y): A defense of accepting or rejecting the Very Repugnant Conclusion (VRC)? [For those who don't know, here's a full text [https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.227.6699&rep=rep1&type=pdf] (PDF) which defines both Conclusions in the introduction.] Accepting VRC would be required by CU, in this hypothetical. So, assuming CU, rejecting VRC would need justification.

Perhaps so. On the other hand, as Vinding also writes (ibid, 5.6; 8.10), the qualitative difference between extreme suffering and suffering that could be extreme if we push a bit further may still be huge. So, "slightly weaker" would not apply to the severity of suffering. Also, irrespective of whether the above point is true, one may (as Taurek did, as I mention in the text) argue that (a) is still less bad than (b), for no one in (a) suffers as much as the one in (b). Here we might at least agree that some forms of aggregating are more plausible than others, at least in practice: e.g. intrapersonal vs interpersonal aggregating.

Vinding too brings up such a disutility monster in Suffering-Focused Ethics: Defense and Implications [https://magnusvinding.com/2020/05/31/suffering-focused-ethics-defense-and-implications/], 3.1, BTW:
Please Take the 2020 EA Survey

unless I think that I'm at least as well informed as the average respondent about where this money should go

This applies if your ethics are very aligned with those of the average respondent, but if not, it is a decent incentive. I'd be surprised if almost all of EAs' disagreement on cause prioritization were strictly empirical.

Adam Binks (1y): I think my ethics are less considered than those of the average EA community member, so I think I'd rather defer the decision to them. Doesn't seem especially motivating for me personally.
antimonyanthony's Shortform

5. I do not expect that artificial superintelligence would converge on The Moral Truth by default. Even if it did, the convergence might be too slow to prevent catastrophes. But I also doubt humans will converge on this either. Both humans and AIs are limited by our access only to our "own" qualia, and indeed our own present qualia. The kind of "moral realism" I find plausible with respect to this convergence question is that convergence to moral truth could occur for a perfectly rational and fully informed agent, with unlimited computa... (read more)

antimonyanthony's Shortform

Some vaguely clustered opinions on metaethics/metanormativity

I'm finding myself slightly more sympathetic to moral antirealism lately, but still afford most of my credence to a form of realism that would not be labeled "strong" or "robust." There are several complicated propositions I find plausible that are in tension:

1. I have a strong aversion to arbitrary or ad hoc elements in ethics. Practically this cashes out as things like: (1) rejecting any solutions to population ethics that violate transitivity, and (2) being fairly unpe... (read more)

antimonyanthony (1y): 5. I do not expect that artificial superintelligence would converge on The Moral Truth by default. Even if it did, the convergence might be too slow to prevent catastrophes. But I also doubt humans will converge on this either. Both humans and AIs are limited by our access only to our "own" qualia, and indeed our own present qualia. The kind of "moral realism" I find plausible with respect to this convergence question is that convergence to moral truth could occur for a perfectly rational and fully informed agent, with unlimited computation and - most importantly - subjective access to the hypothetical future experiences of all sentient beings. These conditions are so idealized that I am probably as pessimistic about AI as any antirealist, but I'm not sure yet if they're so idealized that I functionally am an antirealist in this sense.
Expected value theory is fanatical, but that's a good thing
we shouldn't generally assign probability 0 to anything that's logically possible (except where a measure is continuous; I think this requirement had a name, but I forget)

You're probably (pun not intended) thinking of Cromwell's rule.

MichaelStJules (1y): Yes, thanks!
How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs

Thanks for your reply! :)

I think that in practice no one does A.

This is true, but we could all be mistaken. This doesn't seem unlikely to me, considering that our brains simply were not built to handle such incredibly small probabilities and incredibly large magnitudes of disutility. That said, I won't practically bite the bullet, any more than people who would choose torture over dust specks probably do, or any more than pure impartial consequentialists truly sacrifice all their own frivolities for altruism. (This latter case is often excused as... (read more)

antimonyanthony's Shortform

I don't call the happiness itself "slight"; I call it "slightly more" than the suffering (edit: and also just slightly more than the happiness per person in world A). I acknowledge the happiness is tremendous. But it comes along with just barely less tremendous suffering. If that's not morally compelling to you, fine, but really the point is that there appears (to me at least) to be quite a strong moral distinction between 1,000,001 happiness minus 1,000,000 suffering, and 1 happiness.

antimonyanthony's Shortform

The Repugnant Conclusion is worse than I thought

At the risk of belaboring the obvious to anyone who has considered this point before: The RC glosses over the exact content of happiness and suffering that are summed up to the quantities of “welfare” defining world A and world Z. In world A, each life with welfare 1,000,000 could, on one extreme, consist purely of (a) good experiences that sum in intensity to a level 1,000,000, or on the other, (b) good experiences summing to 1,000,000,000 minus bad experiences summing (in absolute value) to 99... (read more)

MichaelDickens (1y): It seems to me that you're kind of rigging this thought experiment when you define an amount of happiness that's greater than an amount of suffering, but you describe the happiness as "slight" and the suffering as "tremendous", even though the former is larger than the latter.
How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs
At the time I thought I was explaining [Pascal's mugging] badly but reading more on this topic I think it is just a non-problem: it only appears to be a problem to those whose only decision making tool is an expected value calculation.

This is quite a strong claim IMO. Could you explain exactly which other decision-making tool(s) you would apply to Pascal's mugging that make it a non-problem? The descriptions of the tools in stories 1 and 2 are too vague for me to clearly see how they'd apply here.

Indeed, if anything, some of those tools st... (read more)

weeatquince (1y): This is a fascinating question – thank you. Let us think through the range of options for addressing Pascal's mugging. There are basically 3 options:

  • A: Bite the bullet – if anyone threatens to cause infinite suffering then do whatever they say.
  • B: Try to fix your expected value calculations to remove the problem.
  • C: Take an alternative approach to decision making that does not rely on expected value.

It is also possible that all of A and B and C fail for different reasons.* Let's run through.

A: I think that in practice no one does A. If I email everyone in the EA/longtermism community and say: I am an evil wizard, please give me $100 or I will cause infinite suffering! I doubt I will get any takers.

B: You made three suggestions for addressing Pascal's mugging. I think I would characterise suggestions 1 and 2 as ways of adjusting your expected value calculations to aim for more accurate expected value estimates (not as using an alternate decision-making tool). I think it would be very difficult to make this work, as it leads to problems such as the ones you highlight. You could maybe make this work using a high discounting based on "optimiser's curse"-type factors to reduce the expected value of high-uncertainty, high-value decisions. I am not sure. (The GPI paper on cluelessness [https://globalprioritiesinstitute.org/wp-content/uploads/David-Thorstad-Andreas-Mogensen-Heuristics-for-clueless-agents.pdf] basically says that expected value calculations can never work to solve this problem. It is plausible you could write a similar paper about Pascal's mugging. It might be interesting to read the GPI paper and mentally replace "problem of cluelessness" with "problem of Pascal's mugging" and see how it reads.)

C: I do think you could make your third option, the common sense version, work. You just say: if I follow this decision it will lead to very perverse circumstances, such as me having to give everything I own to anyone who cla
AMA: Tobias Baumann, Center for Reducing Suffering

Do you think this is highly implausible even if you account for:

  • the opportunities to reduce other people's extreme suffering that a person committing suicide would forego,
  • the extreme suffering of one's loved ones this would probably increase,
  • plausible views of personal identity on which risking the extreme suffering of one's future self is ethically similar to, if not the same as, risking it for someone else,
  • relatedly, views of probability where the small measure of worlds with a being experiencing extreme suffering are as "real" a
... (read more)

These seem like good objections to me, but overall I still find it pretty implausible. A hermit who leads a happy life alone on an island (and has read lots of books about personal identity and otherwise acquired a lot of wisdom) probably wouldn't want to commit suicide unless the amount of expected suffering in their future was pretty significant.

(I didn't understand, or disagree with, the fourth point.)
