All of AndreaSR's Comments + Replies

Yeah, I've had the same thought. But as far as I can tell, it still doesn't add up, so I figured there must be something else going on. Thanks for your reply, though.

Thanks for your reply. I'm glad my calculation doesn't seem way off. Still, it feels like too obvious a mistake not to have been caught, if it indeed were a mistake...

Thanks for your answer. I don't think I understand what you're saying, though. As I understand it, it makes a huge difference to the resource distribution that longtermism recommends: if you allow e.g. Bostrom's 10^52 happy lives to be the baseline utility, avoiding x-risk becomes vastly more important than if you just consider the 10^10 people alive today. Right?
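For concreteness, here's the back-of-the-envelope arithmetic I have in mind (a minimal Python sketch; the one-percentage-point risk reduction is an assumed, purely illustrative figure):

```python
# Expected lives saved by reducing extinction risk by one percentage point,
# under the two baselines discussed above. The 1-point reduction is a
# hypothetical figure chosen only to make the comparison concrete.
delta_risk = 0.01                # assumed reduction in extinction risk
bostrom_baseline = 10**52        # Bostrom's far-future estimate
present_baseline = 10**10        # roughly the people alive today

print(f"far-future baseline:  {delta_risk * bostrom_baseline:.2e} expected lives")
print(f"present-day baseline: {delta_risk * present_baseline:.2e} expected lives")
# The ratio is 10^42: the choice of baseline swamps any plausible
# difference in cost or tractability between interventions.
```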

djbinder · 3y
In principle I agree, although in practice there are other mitigating factors, which means it doesn't seem to be that relevant. This is partly because the 10^52 number is not very robust. In particular, once you start postulating such large numbers of future people, I think you have to take the simulation hypothesis much more seriously, so the large size of the far future may in fact be illusory. But even on a more mundane level, we should probably worry that achieving 10^52 happy lives might be much harder than it looks. It is also partly because, at a practical level, the interventions long-termists consider don't rely on the possibility of 10^52 future lives; they are good even over just the next few hundred years. I am not aware of many interventions whose impacts are small enough that we would only pursue them because of the 10^52 future lives, yet which still remain robustly positive. This is essentially for the reasons that asolomonr gives in their comment.

Thanks for your reply. A follow-up question: when I see the 'cancelling out' argument, I always wonder why it doesn't apply to the x-risk case itself. It seems to me that you could just as easily argue that halting biotech research in order to enter the Long Reflection might backfire in some unpredictable way, or that aiming at Bostrom's utopia would ruin the chances of ending up in a vastly better state that we had never even dreamt of, and so on and so forth.

Isn't the whole case for longtermism so empirically uncertain as to be open to the 'cancelling out' argument as well?

Hope what I'm saying makes sense.

Harrison Durland · 3y
I do understand what you are saying, but my response (albeit as someone who is not steeped in longtermist/x-risk thought) would be "not necessarily (and almost certainly not entirely)." The tl;dr version is: there are lots of claims about x-risks, and interventions to reduce x-risks, that are reasonably more plausible than their reverse-claims. For example, there are decent reasons to believe that certain forms of pandemic preparation reduce x-risk more than they increase it.

I can't (yet) give full, formalistic rules for how I apply the trimming heuristic, but some of the major points are discussed in the blocks below.

One key to using/understanding the trimming heuristic is that it is not meant to directly maximize the accuracy of your beliefs; rather, it's meant to improve the effectiveness of your overall decision-making *in light of constraints on your time/cognitive resources*. If we had infinite time to evaluate everything--even possibilities that seem like red herrings--it would probably (usually) be optimal to do so. But we don't have infinite time, so we have to decide what to spend our time analyzing and what to accept as best-guesstimates for particularly fuzzy questions. Here, intuition (including about when to rely on various levels of intuition/analysis) can be far more effective than formalistic rules.

I think another key is to understand the distinction between risk and uncertainty: (to heavily simplify) risk refers to confidently verifiable/specific probabilities (e.g., a 1/20 chance of rolling a 1 on a standard 20-sided die), whereas uncertainty refers to situations where we don't confidently know the specific degree of risk (e.g., the chance of rolling a 1 on a confusingly-shaped 20-sided die which has never rolled a 1 yet, but perhaps might eventually); I've sketched this distinction in code below.

In the end, I think my 3-4-ish conditions, or at least factors, for using the trimming heuristic are:

1. There is a high degree of uncertainty associated with the claim (e.g., it is not a well
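A toy sketch of the risk-vs-uncertainty distinction mentioned above (my own illustrative example; the roll counts are assumed, and Laplace's rule of succession stands in for any reasonable estimator):

```python
# Risk: a known, verifiable probability.
known_p = 1 / 20  # fair d20: P(rolling a 1) = 0.05 exactly

# Uncertainty: the probability itself is unknown and must be estimated.
# Suppose the confusingly-shaped d20 has been rolled 30 times (an assumed
# figure) without ever showing a 1.
rolls, ones = 30, 0

# With a uniform prior over the unknown probability, Laplace's rule of
# succession gives the posterior mean: (successes + 1) / (trials + 2).
estimated_p = (ones + 1) / (rolls + 2)

print(f"risk (known):            P(1) = {known_p:.4f}")
print(f"uncertainty (estimated): P(1) ~ {estimated_p:.4f}, and could be far off")
```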