AppliedDivinityStudies

Comments

Base Rates on United States Regime Collapse

Hey, thanks for asking! It's the paragraphs from "Looking back" to "raw base rates to consider".

In some ways this feels like a silly throwback; on the other hand, I think it's actually more worth reading now that we're not caught up in the heat of the moment. More selfishly, I didn't post this on the EA Forum when I first wrote it, but I have since been encouraged to share old posts that might not have been seen.

Mundane trouble with EV / utility

Hey Ben, I think these are pretty reasonable questions and do not make you look stupid.

On Pascal's mugging in particular, I would consider this somewhat informal answer: https://nintil.com/pascals-mugging/ Though honestly, I don't find it super satisfactory, and it is something that still bugs me.

Having said that, I don't think this line of reasoning is necessary for answering your more practical questions 1-3.

Utilitarianism (and Effective Altruism) doesn't require that there's some specific metaphysical construct that is numerical and corresponds to human happiness. The utilitarian claim is just that some degree of quantification is, in principle, possible. The EA claim is that attempting to carry out this quantification leads to good outcomes, even if it's not an exact science.

GiveWell painstakingly compiles numerical cost-effectiveness estimates, but goes on to state that they don't view these as being "literally true". These estimates still end up being useful for comparing one charity relative to another. You can read more about this thinking here: https://blog.givewell.org/2017/06/01/how-givewell-uses-cost-effectiveness-analyses/

In practice, GiveWell makes all sorts of tradeoffs to attempt to compare goods like "improving education", "lives saved" or "increasing income". Sometimes this involves directly asking the targeted populations about their preferences. You can read more about their approach here: https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/2019-moral-weights-research
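To make that concrete, here is a toy sketch of how moral weights can put unlike outcomes on a single scale. Every number and name below is made up for illustration; GiveWell's actual weights and cost figures are in the links above.

# Toy sketch of a moral-weights comparison. All figures are
# hypothetical; see the moral weights link above for real ones.

# Hypothetical moral weights: value of one unit of each outcome
moral_weights = {
    "life_saved": 100.0,         # saving a life: 100 units of value
    "income_doubled_1yr": 1.0,   # doubling income for a year: 1 unit
}

# Hypothetical charities: (outcome bought, dollar cost per outcome)
charities = {
    "charity_A": ("life_saved", 4000.0),
    "charity_B": ("income_doubled_1yr", 50.0),
}

# Value per dollar puts "lives saved" and "income" on one scale
for name, (outcome, cost) in charities.items():
    print(f"{name}: {moral_weights[outcome] / cost:.4f} units per dollar")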

Finally, in the case of existential risk, it's often not necessary to make these kinds of specific calculations at all. By one estimate, the Earth alone could support something like 10^16 human lives, and the universe could support something like 10^34 human life-years, or up to 10^56 "cybernetic human life-years". This is all very speculative, but the potential gains are so large that it doesn't matter if we're off by 40%, or 40x. https://en.wikipedia.org/wiki/Human_extinction#Ethics
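A quick back-of-envelope, using the Earth-only 10^16 figure cited above and today's population as a rough baseline, shows why even a 40x error leaves the conclusion intact:

# Why a 40x error barely matters at these scales
potential_lives = 10**16       # Earth-only estimate cited above
current_lives = 8 * 10**9      # rough present-day population

for error_factor in (1, 40):
    ratio = (potential_lives / error_factor) / current_lives
    print(f"off by {error_factor}x: still ~{ratio:.0e} times the current population")

Even at 40x too optimistic, the potential future is tens of thousands of times larger than the present.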

Returning to the original point, you might ask whether work on x-risk is then a case of Pascal's Mugging. Toby Ord puts the odds of human extinction in the next century at around 1/6. That's a pretty huge chance. We're much less confident about the odds of EA preventing this risk, but it seems reasonable to think they're some normal number, i.e. much higher than 10^-10. In that case, EA has huge expected value. Of course that might all seem like fuzzy reasoning, but I think there's a pretty good case to be made that our odds are not astronomically low. You can see one version of this argument here: https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/
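As a rough illustration of that expected value claim: only the 1/6 figure below is Ord's; the prevention probability is a deliberately conservative made-up guess, and the lives figure is the Earth-only estimate from the Wikipedia link.

# Rough EV sketch; p_prevent is an illustrative guess, not an estimate
p_extinction = 1 / 6        # Ord's estimate for this century
p_prevent = 1e-6            # hypothetical "normal number", far above 10^-10
lives_at_stake = 10**16     # Earth-only estimate cited above

expected_lives = p_extinction * p_prevent * lives_at_stake
print(f"expected lives saved: {expected_lives:.1e}")  # ~1.7e+09

Even with a one-in-a-million chance of success, the expected value runs into the billions of lives, which is why this looks less like a Pascal's Mugging, where the odds would be more like 10^-10.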