I believe there may be systematic issues with how Effective Altruism handles topics outside the core competence of its general membership, particularly defense policy. These areas are complex and, unlike many fields EA operates in, are often short of good public-facing explanations, which can make them hard for outside researchers, such as those from EA organizations, to understand in short order. This raises the possibility of getting basic, uncontroversial details wrong, which can both render analyses inaccurate and ruin EA's credibility with experts in the field.
A bit of background: I've been fascinated by the defense world for over 20 years, and have spent the last 5 working for a major defense contractor. I write a blog, mostly covering naval history, at navalgazing.net, and it was this work which brought me into contact with what appears to be the most in-depth evaluation of nuclear war risks by EA, work conducted by Luisa Rodriguez for Rethink Priorities.
Unfortunately, while this was a credible effort to assess the risks of nuclear war, unfamiliarity with the field meant that a number of errors crept in, ranging from the trivial to the serious. The most obvious are in the analysis of the survivability of the US and Russian nuclear arsenals.
For instance, when discussing the sea-based deterrent, the article states that "[America's] submarines’ surfaces are covered in plastic, which disperses radar signals instead of reflecting them." This is clearly a reference to the anechoic tiles used on modern submarines, but these are intended to protect from sonar, not radar. This probably traces to the source used, an article that was confusingly written by someone who clearly doesn't know all that much about submarine design and ASW sensors, but it is exactly the sort of error which flags an article as being written by someone who doesn't really know what they're talking about.
But that's merely a nitpick, and there's also a very basic flaw in the assumption that nearly all of the warheads aboard SSBNs will survive. While I completely agree that any submarine at sea is nearly invulnerable (and am in fact rather more skeptical than some of the sources that improved technology will render the seas transparent), a substantial fraction of the SSBN force is in port, not at sea. The US attempts to minimize this by providing each submarine with two crews and swapping them out, but even so, each operational submarine still spends about a third of its time in port between patrols. (It will spend more time in overhauls, but sending ships to the yard with missiles aboard is considered bad form, so those warheads probably won't count against the US total.) How these would fare in a war would depend heavily on the situation leading up to its outbreak. If Russia launched a surprise attack, the bases at Bangor and Kings Bay would undoubtedly be among the highest-priority targets. If there had been substantial warning, most of the submarines would likely have been ordered to sea to strengthen the US deterrent.
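To make the in-port fraction concrete, here is a back-of-envelope sketch of how it cuts into the surviving warhead count under a true surprise attack. All numbers (boat count, warheads per boat, at-sea fraction) are illustrative assumptions for the sketch, not official figures:

```python
def surviving_ssbn_warheads(boats, warheads_per_boat, fraction_at_sea):
    """Warheads expected to survive if only boats at sea survive a surprise attack."""
    return boats * fraction_at_sea * warheads_per_boat

# Illustrative assumptions only: 12 operational boats, 90 warheads each,
# roughly 2/3 at sea (matching "about a third of its time in port" above).
total = 12 * 90
at_sea = surviving_ssbn_warheads(12, 90, 2 / 3)
print(f"{at_sea:.0f} of {total} warheads survive ({at_sea / total:.0%})")
```

With strategic warning, boats in port surge to sea and the surviving fraction climbs toward 100%, which is why the pre-war situation matters so much.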
The description of the strategic bomber force bears little connection to the reality of said force and contains even more jarring errors, such as describing the bombers as "air-based". The second paragraph begins with the following statement: "While many strategic bombers are concentrated at air bases and aircraft carriers, making them potentially convenient to target, early warning systems would likely give US pilots enough time to take off before the nuclear warheads arrived at their targets and detonated."
Every part of this sentence is wrong. While the US Navy did operate aircraft that were at least arguably strategic bombers, they were retired from this role in the mid-60s. Nuclear strike with lighter aircraft remained a major carrier role for the rest of the Cold War, but all shipboard nuclear weapons except the SLBMs were withdrawn in the early 90s. Whether or not they have nuclear weapons onboard, aircraft carriers are exceedingly inconvenient to target. And there is little prospect of any strategic bombers getting off the ground if they are caught unawares. During the Cold War, Strategic Air Command did keep aircraft on emergency alert, capable of scrambling and getting out of range in the interval between the detection of incoming ICBMs and the missiles reaching the base. But this was never more than a minority of the bomber force, and the practice ended with the fall of the Soviet Union. Even more strongly than with the SSBN force, the survivability of the bomber force will depend heavily on how much warning is available. At a minimum, it is likely to take an hour or more to load the weapons and brief the crews, a serious problem when a nuclear warhead can reach the base in 15 minutes.
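The timing argument in the last two sentences reduces to a simple inequality: bombers escape only if generation time plus scramble time fits inside the warning time. All times below are rough illustrative assumptions, not official figures:

```python
def bombers_escape(warning_min, generation_min, scramble_min):
    """True if crews can load weapons, brief, and get clear before warheads arrive."""
    return generation_min + scramble_min <= warning_min

# Cold War alert posture (assumed): weapons pre-loaded, crews standing by,
# ~30 minutes of ICBM flight-time warning.
print(bombers_escape(warning_min=30, generation_min=0, scramble_min=10))

# Day-to-day posture (assumed): an hour or more to load and brief, with a
# warhead arriving in ~15 minutes.
print(bombers_escape(warning_min=15, generation_min=60, scramble_min=10))
```

The first case prints True and the second prints False; only an alert force has any chance of getting off the ground.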
A similar lack of understanding comes out in the discussion of the bombers themselves. The US has a mix of B-2s equipped with gravity bombs and B-52s equipped with cruise missiles, but the B-52s are completely ignored, and the discussion of stealth technology doesn't make much sense. The discussion of ICBMs is somewhat better, and while I disagree with the pessimism on missile defense, the position taken by the author is at least colorable.
Similar problems plague the section on Russia, although lack of information and my being less familiar with their setup make them harder to analyze. Particularly notable is citing a 2001 report on Russian submarine readiness, as that marks the nadir of funding for Russian strategic forces. After Putin came to power, more funding flowed to Russia's nuclear forces, and while poor readiness due to corruption certainly cannot be ruled out, it's also far from certain that this is the case.
The other articles written for the Rethink Priorities series on nuclear war have fewer basic errors, probably because they are on subjects that are slightly less opaque and less reliant on domain knowledge. I would argue that the risk of nuclear winter is substantially overstated thanks to reliance on papers with a number of obvious flaws, which John Schilling and I critiqued on the EA-adjacent SSC in 2016. Guarding against that sort of problem is a separate issue, and a rather more difficult one than I can address here.
The basic lesson of all this is the importance of domain knowledge in both understanding and analyzing a problem, and in making sure that those who deal with that problem professionally will take you seriously. Obviously, in the fields EA has the most involvement in, this is unlikely to be an issue, but it could recur as EA looks into new issues, and should be guarded against by trying to find and work with people who are familiar with the domain.
Hey, thanks for pointing these out, and we appreciate your engagement with our work! I'm not the original author, though I did help review and evaluate this research at Rethink Priorities.
We received feedback from many experts at the time, and didn't run into any issues with people not taking us seriously. I also think the details you mentioned, while important, don't do much to undermine our reports' various bottom-line conclusions. Unfortunately we often lack time to properly vet every detail, but we're very happy to have our errors pointed out and correct our posts.
Also, I do generally think the dynamic you describe can be a concerning one. There is a tricky balance to strike between bringing a new perspective to a space you've never worked in before and spending a lot of time on deep solicitation of expert input. I think there can be a lot of value in someone outside a field coming in and evaluating something from a new angle, but it comes with all of the downsides you list. At Rethink Priorities we now encourage our staff to take more time to build expertise, and we're under less pressure to produce fast output than when this post was published.
I think my point should have been phrased less as "people will definitely not take you seriously" and more as "people might not take you seriously". If I were looking for a reason to toss something, the sort of errors here would provide excellent ammo.
More broadly, I'm glad that you guys are trying to address this. I do think that defense is particularly tricky, for reasons I'm still trying to write up, but I also don't have the expertise to critique other areas.
Just to respond to the nuclear winter point.
I actually think the EA world has been pretty good epistemically on winter: appropriately humble and exploratory, mostly funding research to work out how big a problem it is, and not basing big claims on (possibly) unsettled science. The argument for serious action on reducing nuclear risk doesn't rely on claims about nuclear winter, though nuclear winter would really underline its importance. The Rethink Priorities report you critique talks at length about the debate over winter, which is great. See also the 80,000 Hours problem profile, which is similarly cautious and hedged.
The EA world has been the major recent funder of research on nuclear winter: OpenPhil in 2017 and 2020, perhaps Longview, and soon FLI. The research has advanced considerably since 2016. Indeed, most of the research ever published on nuclear winter has appeared in the last few years, using the latest climate modelling. The most recent papers are getting published in Nature. I would disagree that there's a "reliance on papers that have a number of obvious flaws".
Wait. OpenPhil gave money to Toon and Robock? Wow. If I'd known that, I would have written a very sharp criticism of that particular decision.
>Indeed, most of the research ever published on nuclear winter has been published in the last few years, using the latest climate modelling.
The problem isn't climate modeling. The problem is that one of the inputs to the model is wrong by, conservatively, a factor of 50.
>The most recent papers are getting published in Nature. I would disagree that there's a "reliance on papers that have a number of obvious flaws".
Peer review is a useful process, but not a perfect one, hence the existence of the replication crisis. In this case, there are a couple of papers that keep popping up in more recent literature as the source for soot estimates that are extremely bad. But a typical peer reviewer for Nature would have no reason to critique those papers, and doesn't have the expertise to realize how bonkers some of their assumptions are.
I would strongly disagree that nuclear weapons are any sort of existential risk. There aren't nearly enough to wipe out humanity directly, and haven't ever been, and nuclear winter risk is massively overblown, for reasons I explain in the link I just added to the post.
"You know, the nuclear weapons threat has not meaningfully changed since the day I was born in 1952."
This, I would disagree with quite strongly. The nuclear threat has changed several times since then. At that point, arsenals were quite limited. By the late 50s, the US had a huge arsenal, but delivery was by bombers only. The arrival of the ICBM meant that warning times dropped from hours to minutes, which had all sorts of impacts, but the early ICBMs took a while to launch. At about the same time, you see SSBNs, which make it a lot harder to squish the enemy's deterrent. And since the end of the Cold War, you see a massive decrease in arsenals worldwide.
I agree nuclear winter risk is overblown and I'm glad to see more EAs discussing that. But I think you're also overrating the survivability of SSBNs, especially non-American ones. They are not a One Weird Trick, Just Add Water for unassailable second-strike capability, and upkeep/maintenance is only one aspect of that.

Geography plays a huge role in how useful they are: the US based most of its warheads on SSBNs because it has the most favourable conditions for them (unrestricted access to, and naval dominance of, two oceans). In contrast, Russia has much less room to play with (mainly some parts of the Arctic Ocean) and fewer suitable ports to deploy the subs from, and China's situation is even worse: the seas surrounding it have unfavourable bathymetry (very shallow), and the only paths to open ocean are chokepoints.

It's not as hard to detect a submarine as one might think; otherwise, noise-quieting measures like pumpjets, reactor cooling design, and the tiles you mentioned wouldn't be such a huge deal. Most importantly, the US has a large fleet of advanced attack subs (SSNs) the others lack, which pose an enormous threat to SSBNs: an SSBN could pick up a tail without knowing it and be destroyed before it can launch its missiles.
OTOH American SSBNs should be fine at least for the time being as long as they don't do anything stupid like try to sneak up close to another country. But emerging technologies like Magnetic Anomaly Detectors and such will make concealing SSBNs even more difficult and increase reliance on land-based forces in the future.
In fact, from what I know about Chinese nuclear strategy, SSBNs aren't expected to play a major role in the nuclear force at least until Taiwan is taken and the first island chain is broken, granting unrestricted access to the Pacific.
Which is why the concept of "bastion operations" was developed: a sanitized area of water close to the coast where SSBNs can operate relatively safely, supported by friendly air and naval ASW assets that keep hostile SSNs out. Yes, China and Russia can do this, but it's still suboptimal for many reasons.
In retrospect, I should have been clearer in my claim about submarine invulnerability, which was mostly meant to apply to the sort of thing you could reliably do during an attempt to preemptively take out a nuclear arsenal. And yes, it obviously applies more to the US than elsewhere. But note that the link you provide is to an SSN, not an SSBN, and MAD is not a new technology. The first deployment of it I'm aware of was to guard the Strait of Gibraltar in WWII, and if anything it's being phased out these days.
Ok, I suppose that's a useful semantic clarification. I agree they don't pose an X risk to humanity, but they do pose an X risk to modern civilization.
Sorry, but the nuclear threat has not meaningfully changed since the day I was born in 1952. It would only take about 50 nukes to destroy America's largest cities, which would collapse the food distribution system, leading to mass starvation and social and political chaos.
If you have a gun pointed at my head, it doesn't matter how big or small the gun is, how many guns you have, and all of that.
There's no meaningful difference between the Russians having 50 nukes, 1,500 nukes, 5,000 nukes, or 5 million nukes. Once they have 50, X risk to America is in play. As an example, North Korea will soon have enough nukes to demolish America. They just need to get the long-range delivery systems working.
All these fancy calculations the experts love to make are really meaningless, they're mostly just attempts to position themselves as experts.
>Sorry, but the nuclear threat has not meaningfully changed since the day I was born in 1952.
This simply isn't true. Even if we take at face value your claim that it would only take 50 nukes to destroy America's largest cities, and that this in turn would be enough to destroy the US, in 1952 the Soviets had only 50 nukes total, and very limited capability to deliver them to targets in the US. Most would instead have gone to Europe, and a lot of them wouldn't have arrived because the planes carrying them would have been shot down. And this is pre-H-bomb, so you would need more than 50 bombs to do the destruction that 50 could do today. (And to be clear, I don't accept that 50 bombs is nearly enough to pose an X risk to America.)
>As an example, North Korea will soon have enough nukes to demolish America. They just need to get the long-range delivery systems working.
China has cut them off at six missiles, which are aimed at cities where US decision-makers and their families live.
Meaning of SSBN, according to Wikipedia: a nuclear-powered submarine armed with submarine-launched ballistic missiles (the US Navy hull classification SSBN stands for "Ship, Submersible, Ballistic, Nuclear").
Bean, any chance I can get your thoughts on the recent Xia et al. study on nuclear winter? Would be curious what you make of it. It makes nuclear winter sound catastrophic beyond even Sagan's scenarios (5 billion dead due to famine, etc.). Would love an expert's take.
Robock is the second author, and Toon is also on the author list. It's the same people who have been poisoning this particular well for decades, so I'd toss it right there. I have no reason to trust them to actually be doing science, and lots of reason to believe they're being driven by ideology.
Oof, this comment was a shame to read, and I downvoted it. It's an ad hominem attack with no discussion of the content of the paper.
Also, the paper has ten authors and got through Nature peer-review - seems a stretch to write it off as just two people's ideology.
I was perhaps unclear in my original comment. I wrote up a long explanation of the many, many errors those two have made in their nuclear winter models at https://www.navalgazing.net/Nuclear-Winter, which I assumed Henry had read. A quick glance at the paper in question shows that it uses the very models of soot production I critique there. My expertise in agriculture is quite limited, so I can't say anything about how a given amount of soot will affect crop production. I can say that they're relying on a model so terrible that I genuinely don't think a good-faith effort would produce anything that bad. It's pretty hard to explain how the models conveniently get worse at exactly the rate that arsenals shrink, so that the predicted consequences of a nuclear war stay the same. The claims made in the 80s were probably somewhat exaggerated; with arsenals an order of magnitude smaller today, they're clear nonsense.
Appreciate you explaining the downvote. While a more legible argument than "I don't trust X because of what I perceive to be a long pattern of bad behavior that I'm not going to specify" would be much more useful, I still find this more useful than no comment at all, since others at least have a pointer to investigate further themselves.
I suppose the downside of purely ad hominem arguments is that they often just smear the target for unjustified reasons. But to me, a charitable interpretation is that the author of the ad hominem wants to be helpful and informative and just doesn't have the time (or perhaps the legible or non-confidential information) to do more than say they don't trust the person.
Link to the study here