kokotajlod's Comments

Why Don’t We Use Chemical Weapons Anymore?

Great post. I think it focuses too much on the use of chemical weapons against enemy soldiers, however. IMO chemical weapons have almost always been thought of as terror weapons. For example, before WW2 it was feared that squadrons of bombers would drop chemical weapons all over European cities on Day 1 of the next war. Instead, they dropped propaganda leaflets and focused on military targets, and then gradually escalated to bombing and then firebombing cities.

True, civilian populations can be equipped with anti-chemical-weapon gear. But even so, my guess is that chemical bombs would have been effective terror weapons. Imagine if during the Blitz, instead of 100% conventional weapons, they had gone for an 80-20 mix of conventional and gas, with many of the gas weapons on timed release so that hours after the air raid was over the gas would start hissing out.

Another piece of evidence is that the Allies shipped huge amounts of chemical weapons to Italy during their invasion, presumably in case they needed them. (They didn't; in fact a German air raid on the port of Bari accidentally set off Allied chemical weapons and caused massive casualties. Quote: "From the start, Allied High Command tried to conceal the disaster, in case the Germans believed that the Allies were preparing to use chemical weapons, which might provoke them into preemptive use, but there were too many witnesses to keep the secret, and in February 1944, the U.S. Chiefs of Staff issued a statement admitting to the accident and emphasizing that the U.S. had no intention of using chemical weapons except in the case of retaliation.")

As for Stalin: Did the USSR have large chemical weapon stockpiles? Maybe they didn't. Maybe they figured their poorly equipped troops would fare worse in a chemical weapon fight than the Germans. (The Germans, meanwhile, perhaps thought that if they used chemical weapons against the Russians, the Brits and USA would retaliate against Germany.)

Epistemic status: Just presenting some pushback/counter-evidence. Not sure what to think, ultimately; the truth is probably a combination of both factors, I'd guess.

My thoughts on Toby Ord’s existential risk estimates

In general I think you've thought this through more carefully than I have, so without having read all your points I'm just gonna agree with you.

So yeah, I think the main problem with Tobias' original point was that unknown risks are probably mostly new things that haven't arisen yet, and thus the lack of observed mini-versions of them is no evidence against them. But I still think it's also true that some risks just don't have mini-versions, or rather are as likely or more likely to have big versions as mini-versions. I agree that most risks are not like this, including some of the examples I reached for initially.

Cortés, Pizarro, and Afonso as Precedents for Takeover

Update: Turns out the book returns to the topic of Cortés at the end. It confirms what Wikipedia says: that smallpox arrived after Cortés had already killed the emperor and fled the city. I think it also exaggerates the role of smallpox even then; it makes it sound like Cortés' "first assault" on the city failed because the city was too strong, and his "second assault" succeeded because the city had been weakened by disease. But (a) his "first assault" was just him and his few hundred followers killing the emperor and escaping, whereas his "second assault" came after a long siege and involved 200,000 native warriors helping him, plus additional Spaniards with siege weapons etc. Totally different things. (b) Smallpox didn't just strike Tenochtitlan; it hit everywhere, including Cortés' native allies. And (c) the final battle for Tenochtitlan was intense: he didn't exactly walk in over the corpses of smallpox-ridden defenders; he had to fight his way in against a gigantic army of determined defenders. So I still stand by my claim that disease had fairly little to do with Cortés' victory, even though 1493, a book which I otherwise respect, says otherwise. (And by "fairly little" I mean "not so much that my conclusions in the post are undermined.")

My thoughts on Toby Ord’s existential risk estimates

"Likewise, AI can arguably be seen as a continuation of past technological, intellectual, scientific, etc. progress in various ways. Of course, various trends might change in shape, speed up, etc. But so far they do seem to have mostly done so somewhat gradually, such that none of the developments would've been 'ruled out' by expecting the future to look roughly similar to the past or the past+extrapolation. (I'm not an expert on this, but I think this is roughly the conclusion AI Impacts is arriving at based on their research.)"

I agree with all this and don't think it significantly undermines anything I said.

I think the community has indeed developed more diverse views over the years, but I still think the original take (as seen in Bostrom's Superintelligence) is the closest to the truth. The fact that the community has gotten more diverse can be easily explained as the result of it growing a lot bigger and having a lot more time to think. (Having a lot more time to think means more scenarios can be considered, more distinctions made, etc. More time for disagreements to arise and more time for those disagreements to seem like big deals when really they are fairly minor; the important things are mostly agreed on but not discussed anymore.) Or maybe you are right and this is evidence that Bostrom is wrong. Idk. But currently I think it is weak evidence, given the above.

My thoughts on Toby Ord’s existential risk estimates

Tobias' original point was: "Also, if engineered pandemics, or 'unforeseen' and 'other' anthropogenic risks have a chance of 3% each of causing extinction, wouldn't you expect to see smaller versions of these risks (that kill, say, 10% of people, but don't result in extinction) much more frequently? But we don't observe that."

Thus he is saying there aren't any "unknown" risks that do have common mini-versions but just haven't had time to develop yet. That's way too strong a claim, I think. Perhaps in my argument against this claim I ended up making claims that were also too strong. But I think my central point is still right: Tobias' argument rules out things arising in the future that clearly shouldn't be ruled out, because if we had run that argument in the past it would have ruled out various things (e.g. AI, nukes, physics risks, and come to think of it even asteroid strikes and pandemics if we go far enough back) that in fact happened.

My thoughts on Toby Ord’s existential risk estimates

Yeah, in retrospect I really shouldn't have picked nukes and natural pandemics as my two examples. Natural pandemics do have common mini-versions, and as for nukes, the jury is still out. I think it could go either way: nukes maybe can kill everyone, because the people who survive the initial blasts might die from various other causes, e.g. civilizational collapse or nuclear winter. But insofar as we think that isn't plausible, then yeah, killing 10% is way more likely than killing 100%. (I'm assuming we count killing 99% as killing 10% here.)
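
In symbols, the assumption here is just that "kills X%" means "kills at least X%", in which case killing 99% is a special case of killing 10% and so can only be less probable:

\[ \{\text{kills} \geq 99\%\} \subseteq \{\text{kills} \geq 10\%\} \quad\Longrightarrow\quad \Pr(\text{kills} \geq 99\%) \leq \Pr(\text{kills} \geq 10\%). \]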

I think AI, climate change tail risks, physics risks, grey goo, etc. would be better examples for me to talk about.

My thoughts on Toby Ord’s existential risk estimates

I feel the need to clarify, by the way, that my tone here has been a bit overly aggressive, and I apologize for that. I was writing quickly and didn't realize how I came across. I think you are making good points, and I have been upvoting them even as I disagree with them.

My thoughts on Toby Ord’s existential risk estimates

I think there are some risks which have "common mini-versions," to coin a phrase, and others which don't. Asteroids have mini-versions (10%-killer versions), and depending on how common they are, the 10%-killers might be more likely than the 100%-killers, or vice versa. I actually don't know which is more likely in that case.

AI risk is the sort of thing that doesn't have common mini-versions, I think. An AI with the means and motive to kill 10% of humanity probably also has the means and motive to kill 100%.

Natural pandemics DO have common mini-versions, as you point out.

It's less clear with engineered pandemics. That depends on how easy it is to engineer one that kills everyone vs. one that kills not-everyone-but-at-least-10%, and on how motivated the various potential engineers are.

Accidental physics risks (like igniting the atmosphere, or triggering a false vacuum collapse or black hole with a particle collider) are way more likely to kill 100% of humanity than 10%. They do not have common mini-versions.

So what about unknown risks? Well, we don't know. But from the track record of known risks, it seems that probably there are many diverse unknown risks, and so probably at least a few of them do not have common mini-versions.

And by the argument you just gave, the "unknown" risks that have common mini-versions won't actually be unknown, since we'll see their mini-versions. So "unknown" risks are going to be disproportionately the kind of risk that doesn't have common mini-versions.

...

As for what I meant about making the exact same argument in the past: I was just saying that we've discovered various risks that don't have common mini-versions, risks which at one point were unknown and then became known. Your argument basically rules out discovering such things ever again. Had we listened to your argument before learning about AI, for example, we would have concluded that existentially risky AI was impossible, or that somehow AIs with the means and motive to kill 10% of people are more likely than AIs which pose existential threats.

Cortés, Pizarro, and Afonso as Precedents for Takeover

I'm reading it now; it is indeed a very good book. I don't think it supports the claim that disease hit the Aztecs before Cortés arrived: it makes a brief one-sentence claim to that effect, but other sources (e.g. Wikipedia) say the opposite and give more details (e.g. that the disease arrived with the expedition sent to capture Cortés). And of course there's still Afonso.

My thoughts on Toby Ord’s existential risk estimates

Yeah, I take back what I said about it being substantially less likely; that seems wrong.
