All of Spencer Chapman's Comments + Replies

Thanks v much - your third para was a much better explanation of what I was driving at!

1
MichaelA
4y
Btw, there's a section on "Comparing and combining risks" in Chapter 6 of The Precipice (pages 171-173 in my version), which is very relevant to this discussion. Appendix D expands on that further. I'd recommend interested people check that out.

At risk of sounding foolish, this seems odd: "There can't be too many things that reduce the expected value of the future by 10%; if there were, there would be no expected value left. So, the prior that any particular thing has such an impact should be quite low."

But if we lived in a world where there were 10 death stars lurking in the heavens, and all of them were very likely to obliterate the Earth and reduce its expected value significantly, then couldn't the EV detraction of each individual death star be (say) 90... (read more)
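One way to make this concrete: independent reductions compound multiplicatively rather than adding up, so many large "EV reducers" can coexist without the total reduction exceeding 100%. A toy sketch with made-up numbers (10 death stars, each independently obliterating Earth with probability 0.9):

```python
# Toy model: 10 independent "death stars", each destroying Earth with probability 0.9.
# Shows that many 90%-EV-reducers can coexist because the reductions compound
# multiplicatively rather than summing to more than 100%.

baseline_ev = 1.0          # EV of the future with no death stars (normalised)
p_destroy = 0.9            # chance each individual death star obliterates Earth
n_stars = 10

survival_prob = (1 - p_destroy) ** n_stars   # 0.1 ** 10 ~= 1e-10
ev_with_stars = baseline_ev * survival_prob

# Each star, considered alone, reduces EV by 90%...
single_star_reduction = p_destroy
# ...but the combined reduction is not 10 * 90% = 900%; it stays below 100%.
combined_reduction = 1 - survival_prob

print(f"EV with all 10 stars: {ev_with_stars:.1e}")           # ~1e-10
print(f"Reduction from any one star alone: {single_star_reduction:.0%}")
print(f"Combined reduction: {combined_reduction:.10%}")        # ~99.99999999%
```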

3
MichaelA
4y
I also found that passage odd, though I think for a somewhat different reason (or at least with a different framing). For me, the passage reminded me of the O-ring theory of economic development, "which proposes that tasks of production must be executed proficiently together in order for any of them to be of high value".

For the sake of the argument, let's make the very extreme and unlikely assumption that, if no longtermists worked on them, each of AI risk, biorisk, and nuclear war would by itself be enough to guarantee an existential catastrophe that reduces the value of the future to approximately 0. In that case, we might say that the EV of the future is ~0, and even if we were to totally fix one of those problems, or even two of them, the EV would still be ~0. Therefore, the EV of working on any one or two of the problems, viewed in isolation, is 0. But the EV of fixing all three would presumably be astronomical.

We could maybe say that existential catastrophe in this scenario is overdetermined, and so we need to remove multiple risks in order for catastrophe to actually not happen. This might naively make it look like many individual prevention efforts were totally worthless, and it might indeed mean that they are worthless if the other efforts don't happen, but it's still the case that, altogether, that collection of efforts is extremely valuable. This also sort-of reminds me of some of 80k/Ben Todd's comments on attributing impact, e.g.:

I haven't taken the time to work through how well this point holds up when instead each x-risk causes a less than 100% (e.g. 10%) chance of existential catastrophe if there were no longtermists working on it. But it seems plausible that there could be more than 100% worth of x-risks, if we add it up across the centuries/millennia, such that, naively, any specified effort to reduce x-risks that doesn't by itself reduce the total risk to less than 100% appears worthless. So I think the point that, in a sense, only so ma
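A toy sketch of that extreme assumption (made-up values; three risks, each assumed to guarantee catastrophe unless fixed), showing that the marginal EV of fixing any one or two risks is ~0 while fixing all three is worth the whole future:

```python
# Toy model of the "overdetermined catastrophe" point: three risks, each of which
# (by the extreme assumption above) guarantees existential catastrophe unless fixed.
from itertools import combinations

RISKS = ["AI", "bio", "nuclear"]
FUTURE_VALUE = 1.0   # normalised value of the future if no catastrophe occurs

def ev(fixed_risks):
    """EV of the future given which risks have been fully fixed."""
    remaining = set(RISKS) - set(fixed_risks)
    # Any single remaining risk is assumed to cause catastrophe with certainty.
    return FUTURE_VALUE if not remaining else 0.0

for k in range(len(RISKS) + 1):
    for subset in combinations(RISKS, k):
        print(f"Fix {subset or 'nothing'}: EV = {ev(subset)}")

# Naively, each individual fix "adds" 0 EV in isolation, yet the full collection
# of fixes is worth the entire (astronomical) value of the future.
```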
1
Rohin Shah
4y
See this comment thread.