AidanGoth

Comments

Weighted Pros / Cons as a Norm

Sorry for the slow reply. I don't have a link to any examples I'm afraid but I just mean something like this:

Prior that we should put weights on arguments and considerations: 60%

Pros:

  • Clarifies the writer's perspective on each of the considerations (65%)
  • Allows for better discussion for reasons x, y, z... (75%)

Cons:

  • Takes extra time (70%)

This is just an example I wrote down quickly, not actual views. But the idea is to state explicit probabilities so that we can see how they change with each consideration.

To see how you can work out the Bayes' factors, note that if $p_0$ is our prior probability that we should give weights, $q_0 = 1 - p_0$ is our prior that we shouldn't, and $p_1$ and $q_1$ are the posteriors after argument 1, then the Bayes' factor for the first pro is $\frac{p_1/q_1}{p_0/q_0} = \frac{0.65/0.35}{0.60/0.40} \approx 1.24$.

Similarly, the Bayes' factor for the second pro is $\frac{p_2/q_2}{p_1/q_1} = \frac{0.75/0.25}{0.65/0.35} \approx 1.62$.
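
To make the arithmetic concrete, here's a minimal Python sketch (my own illustration, using the made-up probabilities from the example above; the variable names are just for exposition) that computes each Bayes' factor as a ratio of odds:

```python
def odds(p):
    """Convert a probability into odds in favour."""
    return p / (1 - p)

prior = 0.60         # prior that we should put weights on arguments
after_pro_1 = 0.65   # posterior after the first pro
after_pro_2 = 0.75   # posterior after the second pro
after_con_1 = 0.70   # posterior after the con

# Bayes' factor for an argument = odds after it / odds before it
bf_pro_1 = odds(after_pro_1) / odds(prior)        # ~1.24
bf_pro_2 = odds(after_pro_2) / odds(after_pro_1)  # ~1.62
bf_con_1 = odds(after_con_1) / odds(after_pro_2)  # ~0.78

print(bf_pro_1, bf_pro_2, bf_con_1)
```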

Weighted Pros / Cons as a Norm

Good questions! It's a shame I don't have good answers. I remember finding Spencer Greenberg's framing helpful too but I'm not familiar with other useful practical framings, I'm afraid.

I suggested the Bayes' factor because it seems like a natural choice for the strength/weight of an argument, but I don't usually find it super easy to reason about.

The final suggestion I made will often be easier to do intuitively. You can just state your prior at the start and then intuitively update it after each argument/consideration, without any maths. I think this is something you get a bit of a feel for with practice, and I would guess it would usually be better than trying to formally apply Bayes' rule. (You could then work out your Bayes' factor, since it's just a function of your prior and posterior, but that doesn't seem especially useful at this point / it seems like too much effort for informal discussions.)

Weighted Pros / Cons as a Norm

Nice post! I like the general idea and agree that a norm like this could aid discussions and clarify reasoning. I have some thoughts that I hope can build on this.

I worry, though, that the (1-5) scale might be too simple or misleading in many cases, and that it doesn't quite give us the most useful information. My first concern is that this looks like a cardinal scale (especially given the way you calculate the output), but is it really the case that you should weigh arguments with a score of 2 twice as much as arguments with a score of 1, and so on? Some arguments might be much more than 5x as important as others, but that can't be captured on the (1-5) scale.

Maybe this would work better as an ordinal ranking with 5 degrees of importance (the initial description sounds more like this). In the example, this would be sufficient to establish that the pros have more weight, but it wouldn't always be conclusive (e.g. 5, 1 on the pro side and 4, 3 on the con side).

I think a natural cardinal alternative would be to give the Bayes' factor for each argument/consideration, and ideally to give a prior probability at the start. Or, similarly, give a prior and then update it after each argument/consideration, so that you and the reader can see how much each one affects your beliefs. I've seen this used before and found it helpful. It also seems to convey more useful information than how important an argument/consideration is: namely, how much we should update our beliefs in response to it.
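
As a rough illustration of that last suggestion (a hypothetical sketch, not a prescribed format; the numbers just mirror the worked example earlier on this page), you can state a prior and then apply each argument's Bayes' factor in odds form to get the running posterior:

```python
def update(prob, bayes_factor):
    """Apply a Bayes' factor to a probability via its odds."""
    o = prob / (1 - prob)      # convert to odds
    o *= bayes_factor          # multiply by the Bayes' factor
    return o / (1 + o)         # convert back to a probability

p = 0.60  # stated prior
for name, bf in [("pro 1", 1.24), ("pro 2", 1.62), ("con 1", 0.78)]:
    p = update(p, bf)
    print(f"after {name}: p = {p:.2f}")
# prints roughly 0.65, 0.75, 0.70 - the posteriors from the example
```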

Mundane trouble with EV / utility

I think NunoSempere's answer is good, and looking at vNM utility should give you a clearer idea of where people are coming from in these discussions. I would also recommend the Stanford Encyclopedia of Philosophy's article on expected utility theory. https://plato.stanford.edu/entries/rationality-normative-utility/

You make an important and often overlooked point about the Long-Run Arguments for expected utility theory (described in the article above). You might find Christian Tarsney's paper, Exceeding Expectations, interesting and relevant. https://globalprioritiesinstitute.org/christian-tarsney-exceeding-expectations-stochastic-dominance-as-a-general-decision-theory/

On 3, this is a super hard practical difficulty that doesn't have a satisfactory answer in many cases. Very relevant is Hilary Greaves' Cluelessness. https://issuu.com/aristotelian.society/docs/greaves

As NunoSempere suggests, GiveWell is a good place to look for some tricky comparisons. My colleague, Stephen Clare, and I made this (very primitive!) attempt to compare saving human lives with sparing chickens from factory farms, which you might find interesting. https://forum.effectivealtruism.org/posts/ahr8k42ZMTvTmTdwm/how-good-is-the-humane-league-compared-to-the-against

Wholehearted choices and "morality as taxes"

I found this really motivating and inspiring. Thanks for writing. I've always found the "great opportunity" framing of altruism stretched and not very compelling but I find this subtle reframing really powerful. I think the difference for me is the emphasis on the suffering of the drowning man and his family, whereas "great opportunity" framings typically emphasise how great it would be for YOU to be a hero and do something great. I prefer the appeal to compassion over ego.

I usually think more along Singerian obligation lines and this has led to unhealthy "morality as taxes" thought patterns. On reflection, I realise that I haven't always thought about altruism in this way and I actually used to think about it in a much more wholehearted way. Somehow, I largely lost that wholehearted thinking. This post has reminded me why I originally cared about altruism and morality and helped me revert to wholehearted thinking, which feels very uplifting and freeing. I plan on revisiting this whenever I notice myself slipping back into "morality as taxes" thought patterns.

Wholehearted choices and "morality as taxes"

My reading of the post is quite different: This isn't an argument that, morally, you ought to save the drowning man. The distant commotion thought experiment is designed to help you notice that it would be great if you had saved him and to make you genuinely want to have saved him. Applying this to real life, we can make sacrifices to help others because we genuinely/wholeheartedly want to, not just because morality demands it of us. Maybe morality does demand it of us but that doesn't matter because we want to do it anyway.

Comments on “Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty”

Agreed. I didn't mean to imply that totalism is the only view sensitive to the mortality-fertility relationship - just that the results could be fairly different on totalism, which makes it especially important to see those results and a natural view to consider before other population ethical views not yet covered. Exploring other population ethical views would be good too!


If parents are trying to have a set number of children (survive to adulthood) then the effects of reducing mortality might not change the total number of future people much, because parents adjust fertility

I think my concern here was that the post suggested that saving lives might not be very valuable on totalism due to a high fertility adjustment:

A report written for GiveWell estimated that in some areas where it recommends charities the number of births averted per life saved is as large as 1:1, a ratio at which population size and growth are left effectively unchanged by saving lives.[45] For totalists, the value of saving lives in a 1:1 context would be very small (compared to one where there was no fertility reduction) as the value of saving one life is ‘negated’ by the disvalue of causing one less life to be created.

Roodman's report (if I recall correctly) suggested that this likely happens to a lower degree in areas where infant mortality is high (i.e. parents adjust fertility less in high infant mortality settings) so saving lives in these settings is plausibly still very valuable according to totalism.

Comments on “Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty”

This is a great summary of what I was and wasn't saying :)

Thanks for the link - looking forward to reading it. I might return to this after reading.
