I appreciate the answers so far!
One thing I realized I'm curious about in asking this is how many of the groups of people/governing bodies out there are actually crazy enough to use nuclear weapons even when self-annihilation is assured. This seems like an interesting last check against horrible mutual destruction outcomes. The hypothesis to invalidate is: maybe the types of people assembled into the groups we call "governments" are very unlikely to carry an "activate mutual destruction" decision all the way through. To be clear, I don't believe this, and I think there is good evidence that individuals will do this, but I feel sufficiently confused about the government dynamic to ask.
Of all the national regimes and regional ruling factions since 1950, how many would have used nukes even if they knew an adversary would retaliate with overwhelming force? Have there been any real situations where non-great-power governments were pushed so far as to resort to nuclear (enemy + self) destruction?
For example, my extremely amateur read makes it seem like Israel was at least somewhat close to going nuclear in the Yom Kippur War. And I'd guess that some of the more insane genocide-y civil war factions like the Khmer Rouge wouldn't have been that concerned about the self-destruction bit, though I don't know enough history to say for sure, or whether they were ever pushed to a breaking point.
I'm familiar with all the standard US-Russia examples of this (I think), and when I put my skeptic hat on / try to steelman, it seems like it's hard to know how many additional "filters" would need to be cleared before an actual launch. I'd be interested in cases of the form "and then [the government or civil war faction or whatever] took some action which they indisputably believed at the time would lead to a large-scale tragedy, destroy themselves and all their loved ones, etc." Cases where the group definitely believed they played "defect" in the mutually assured destruction game (at least on some scale). Maybe none exist outside of cults and terrorist groups? Though some of those groups might be more government-like than others.
Great set of links, appreciate it. Was especially excited to see lukeprog's review and the author's presentation of Atomic Obsession.
I'm inclined toward answers of the form "seems like they would have been used more, or some civilizational factor would need to change" (which is how I interpret Jackson's answer on strong global policing). Which is why I'm currently most interested in understanding the Atomic Obsession-style skeptical take.
If anyone is interested, the following are some of the author's claims which seem pertinent, at least as far as I can tell (from the author's summary, a couple reviews, and a few chapters but not the whole book):
It seems like the first two are pretty straightforwardly true. (3) is most interesting, and I haven't been able to make Mueller's argument crisp for myself on this point. My attempt at breaking down (3), with some of my own attempt at steelmanning:
a) Nuclear weapons are really expensive.
b) Gaining nuclear weapons upsets your neighbors, which is an additional cost.
c) There are cheaper ways of getting a more compelling deterrent; for example, North Korea could invest in artillery to put more pressure on Seoul.
d) Countries didn't really have any interest in going to war anyway, so deterrents were not needed (I think he claims something about Stalin and other communist powers having no interest in war with the Western powers).
e) Nukes are technically complex, and even if smaller actors, possibly including e.g. factions in a civil war, were to steal them, they would have a hard time using them.
f) Nukes are easy to police, because nuclear forensics are quite good at attributing events to their creators.
g) People have to be really crazy to use nuclear weapons, given they aren't very effective on military targets and can't actually help you win, only commit suicide.
(It seems worth mentioning that in my admittedly cursory read of Mueller's arguments, in the form mentioned above, I found some points I've omitted because they seem mutually inconsistent and make him look dogmatic to me. For example, at one point in his nuclear terrorism section he seems to use the fact that the CIA would probably have infiltrated a group as evidence for the overarching claim that investment in counter-proliferation is wasted. The contradiction is obviously that the CIA probably wouldn't invest as much in infiltrating terrorist groups attempting to build nukes if that were less of a priority.)
If we take my hypothetical to mean "nuclear weapons are cheaper to build" (sorry for the ambiguity there), then (a), (b), (c), and (e) seem basically null. I read (d) as pretty far removed from the facts; there's some good evidence for this in the comments on the lukeprog post, especially Max Daniel's.
Which leaves (f), nukes are easy to police, and (g), people aren't crazy enough to actually use them.
Re direct military conflicts between nuclear weapons states: this might not exactly fit the definition of "direct", but I enjoyed skimming the mentions of nuclear weapons in the Wikipedia article on the Yom Kippur War, which saw a standoff between Israel (nuclear) and Egypt (not nuclear, but which had reportedly been delivered warheads by the USSR). There is some mention of Israel "threatening to go nuclear", possibly as a way of forcing the US to intervene with conventional military resources.
Interesting! For (1), how do you expect the economic superpowers to respond to smaller nations using nuclear weapons in this world? It sounds like, because of MAD between the large nations, your model is that they must either allow small nuclear conflicts or pivot into your scenario 2 of increased global policing. Is that correct?
Thanks for this post Luisa! Really nice resource and I wish I caught it earlier. A couple methodology questions:
Why do you choose an arithmetic mean for aggregating these estimates? It seems like there is an argument to be made that in this case we care about order-of-magnitude correctness, which would imply taking the average of the log probabilities. This is equivalent to the geometric mean (I believe) and is recommended for Fermi estimates, e.g. [here](https://www.lesswrong.com/posts/PsEppdvgRisz5xAHG/fermi-estimates).
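To illustrate how much the two aggregation schemes can diverge when estimates span orders of magnitude, here's a minimal sketch with made-up probabilities (not the actual estimates from the post):

```python
import math

# Made-up annual-probability estimates spanning several orders of magnitude
estimates = [0.02, 0.01, 0.005, 0.0001]

# Arithmetic mean: dominated by the largest estimates
arithmetic = sum(estimates) / len(estimates)

# Geometric mean = exp of the mean log-probability: tracks the "typical" order of magnitude
geometric = math.exp(sum(math.log(p) for p in estimates) / len(estimates))

print(arithmetic)  # ≈ 0.0088
print(geometric)   # ≈ 0.0032, roughly half an order of magnitude lower
```

The single very low outlier barely moves the arithmetic mean but pulls the geometric mean down substantially, which is the behavior you'd want if you think each forecaster's error is multiplicative rather than additive.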
Do you have a sense for how much, if at all, these estimates are confounded by the variable of time? Are all estimates trying to guess the likelihood of war in the few years following the estimate, or do some have longer time horizons? (You mention this explicitly for a number of them, but I'm struggling to find it for all of them; sorry if I missed it.) If these are forecasting something close to the instantaneous yearly probability, do you think we should worry about adjusting estimates by when they were made, in case, e.g., a lot has changed between 2005 and now?
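As a sketch of why the horizon matters when comparing estimates: under the (strong) assumption of a constant, independent per-year risk, an N-year estimate can be converted to an annual probability. The `annualize` helper below is my own hypothetical illustration, not something from the post:

```python
def annualize(p_horizon: float, years: float) -> float:
    """Convert a probability over a multi-year horizon to a constant annual
    probability, assuming identical independent risk each year."""
    return 1 - (1 - p_horizon) ** (1 / years)

# e.g. a 10% chance of war over 20 years is far from a 10% annual risk
p_annual = annualize(0.10, 20)
print(p_annual)  # ≈ 0.0053 per year
```

So averaging a 5-year estimate and a 50-year estimate without annualizing first could easily introduce an order-of-magnitude distortion on its own.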
Related to the above, do you believe risk of nuclear war is changing with time or approximately constant?
Did you consider any alternative schemes to weighting these estimates equally? I notice that for example the GJI estimate on US-Russia nuclear war is more than an order of magnitude lower than the rest, but is also the group I'd put my money on based on forecasting track record. Do you find these estimates approximately equally credible?
Curious for your thoughts!
Stumbling on this today. Did this article ever get published? Would be keen to read it.
Strong +1 to this. I think I have observed people who have really good academic research taste but really bad EA research taste.
Taste is huge! I was trying to roll this under my "Process" category, where taste manifests in choosing the right project, choosing the right approach, choosing how to sequence experiments, etc. Alas, it's not a lossless factorization.
These exercises look quite neat, thanks for sharing!
Thanks Seb. I don't think I have energy to fully respond here, possibly I'll make a separate post to give this argument its full due.
One quick point relevant to Crux 2:
"I can also think of many examples of groundbreaking basic science that looks defensive and gets published very well (e.g. again sequencing innovations, vaccine tech; or, for a recent example, several papers on biocontainment published in Nature and Science)."
I think there are many-fold differences in impact per dollar between the tech you build if you are actually trying to solve the problem and the type of probably-good-on-net examples you give here.
Other parallel ways of making this point: