Simon

29 karma · Joined

Posts
1


Comments
4

"I don't think that the Spanish flu made us more prepared against Covid-19": actually, I'd bet our response to Covid-19 was better than it would have been had we never experienced major pandemics. For example, the response involved developing effective vaccines very quickly.

More generally, it would be good to have a running, regularly updated record of the most important AI safety papers of each year.

Great post! A few reactions:

1. With space colonization, we can hopefully create causally isolated civilizations. Once this happens, the risk of total civilizational collapse falls dramatically, because the colonies' fates are independent of one another.

2. There are two different kinds of catastrophic risk: chancy, and merely uncertain. Compare flipping a fair coin (chancy) to flipping a coin that is either double-headed or double-tailed, but you don't know which (merely uncertain). If alignment is merely uncertain, then conditional on solving it once, we are in the double-headed case, and we will solve it again. Alignment might be like this: for example, one picture is that alignment might be brute-forceable with enough data, but we just don't know whether this is so. At any rate, merely uncertain catastrophic risks do not have rerun risk, while chancy ones do.
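The chancy vs. merely-uncertain distinction can be made precise with the coin analogy from the point above. Here is a minimal sketch (my own illustration, not from the original comment) computing the probability of a second heads given a first heads under each model:

```python
# Chancy: a fair coin, each flip independent of the last.
p_heads = 0.5
p_second_heads_chancy = p_heads  # independence: past success tells us nothing

# Merely uncertain: the coin is double-headed or double-tailed, prior 0.5 each.
# By Bayes' rule, observing one heads reveals it must be double-headed.
prior_double_headed = 0.5
p_first_heads = prior_double_headed * 1.0 + (1 - prior_double_headed) * 0.0
posterior_double_headed = (prior_double_headed * 1.0) / p_first_heads  # = 1.0
p_second_heads_uncertain = posterior_double_headed * 1.0  # = 1.0

print(p_second_heads_chancy)     # 0.5 — rerun risk remains
print(p_second_heads_uncertain)  # 1.0 — no rerun risk after one success
```

The same structure carries over to alignment: if the risk is merely uncertain, one success updates us all the way to the "double-headed" world, so reruns are safe; if it is chancy, each rerun carries the same risk as the first.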

3. I'm a bit skeptical of demographic decline as a catastrophic risk, because of evolutionary pressure. If some groups stop reproducing, groups with high reproduction rates will tend to replace them. 

4. Regarding unipolar outcomes, you're suggesting a picture where unipolar outcomes carry less catastrophic risk but more lock-in risk. I'm unsure of this. First, a unipolar world government might have a higher risk of civil unrest. In particular, you might think that elites tend to treat residents better out of fear of external threats; without such threats, they may exploit residents more, leading to more civil unrest. Second, unipolar AI outcomes may have a higher risk of AI going rogue than multipolar ones, because in multipolar outcomes humans may have extra value to AIs as partners in competition against other AIs.