All posts

Monday, 27 June 2022

Quick takes

Looking for help: what's the opposite of counterfactual reasoning -- in other words, when EAs encourage counterfactual reasoning, what do they discourage? I ask because I'm writing about good epistemic practices and mindsets, and I'm trying to structure the piece as a list of opposites (scout mindset vs. soldier mindset, numerical vs. verbal reasoning, etc.). Would it be correct to say that counterfactual reasoning has no real opposite, and that the appropriate contrast is instead "counterfactual reasoning done well vs. counterfactual reasoning done badly"?
Recently, I was reading David Thorstad's new paper "Existential risk pessimism and the time of perils". In it, he models the value of reducing existential risk under a range of different assumptions. The headline results are that 1) on the most plausible assumptions, existential risk reduction is not overwhelmingly valuable: it may still be quite valuable, but it probably doesn't swamp all other cause areas; and 2) thinking that extinction is more likely tends to weaken the case for existential risk reduction rather than strengthen it.

One of the results struck me as particularly interesting; I call it the repugnant solution: if we can reduce existential risk to 0% per century across all future centuries, that act is infinitely valuable, even if the initial risk was absolutely tiny and each century is only just of positive value. It is therefore better than basically anything else we could do. Perhaps, in a Pascalian way, if we think there is a tiny chance that some particular action will lead to a permanent reduction in existential risk, that act too is infinitely valuable, and everything breaks. And this remains true even if we lower the value of each century from "really amazingly great" to "only just net positive".
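To make the "infinitely valuable" result concrete, here is a minimal sketch of the kind of expected-value calculation at stake, assuming a constant value v per century and a constant per-century extinction risk r. This is a simplification, and the function name and parameters are my own illustration, not the paper's:

```python
# Sketch of the simple expected-value model described above (my illustration;
# Thorstad's paper uses a more careful formalism). Assume each century has a
# constant value v > 0 and a constant per-century extinction risk r. The expected
# value of the future is v times the sum, over centuries, of the probability of
# surviving to that century.

def expected_future_value(v: float, r: float, horizon: int = 10_000) -> float:
    """Approximate E[value] = v * sum_{t=1}^{horizon} (1 - r)**t."""
    return v * sum((1 - r) ** t for t in range(1, horizon + 1))

# Any nonzero per-century risk makes the series converge to v * (1 - r) / r:
print(expected_future_value(v=1.0, r=0.01))  # ~99.0: finite, even though the risk is tiny
# With risk permanently at zero, every century counts in full and the sum grows
# without bound as the horizon grows -- the "infinitely valuable" case:
print(expected_future_value(v=1.0, r=0.0))   # 10000.0; diverges as horizon -> infinity
```

With any fixed r > 0 the geometric series converges to roughly v(1 - r)/r, so the expected value of the future is finite even when the risk is tiny; set r = 0 and the sum grows without bound, which is why a permanent reduction to zero risk dominates everything else in this kind of model.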