Thanks for your comment and for adding to Aron’s response to my post!
Before reacting point by point, one more overarching warning/clarification/observation: My views on the disvalue of numerical reasoning and the use of BOTECs in deeply uncertain situations are quite unusual within the EA community (though not unheard of; see, for instance, the EA Forum post "Potential downsides of using explicit probabilities" and the GiveWell blog post "Why we can't take expected value estimates literally (even when they're unbiased)", both of which acknowledge some of the concerns that motivate my skeptical stance). I can imagine that this is a heavy crux between us and that it makes advances/convergence on more concrete questions (especially through a forum comments discussion) rather difficult. That is not at all meant to discourage engagement or to suggest I find your comments unhelpful (quite the contrary); I just note it in an attempt to avoid us arguing past each other.
Ah, I think maybe there is/was a misunderstanding here. I don't reject the claim that forecasters are (much) better on average when using probabilities than when refusing to do so. My point is that the questions we're talking about (what would be the full set of important* consequences of nuclear first use, or the full set of important* consequences of nuclear risk reduction interventions X and Z) are not your standard, well-defined, soon-to-be-resolved forecasting questions. So, in a sense, the very fact that the questions at issue cannot be part of a forecasting experiment is one of the main reasons why I think they are so deeply uncertain and hard to answer with more than intuitive guesswork. (If they could be part of a forecasting experiment, people could test and train their skill at assigning probabilities by answering many such questions, in which case I would be more amenable to the claim that assigning probabilities can be useful.) The way I understood our disagreement, it was not about the predictive performance of actors who do vs. don't (always) use probabilities, but rather about their decision quality. The actual disagreement may be that I think there is a significant difference between the two (for some decisions, high decision quality is not a neat function of explicit predictive ability), whereas you might come close to equating them?
[*by "full set" I mean that this is supposed to include indirect/second-order consequences]
That said, I can't, unfortunately, think of any alternative way to resolve the disagreement regarding the decision quality of people who use vs. refuse to use probabilities in situations where assessing the effects of a decision/action after the fact is highly difficult. (While the comment added by Noah Scales contains some interesting ideas, I don't think it does anything to resolve this stalemate, since it is also focused on comparing and assessing predictive success for questions with a small set of known answer options.)
One other thing, because I forgot about that in my last response:
"FInally, I am not that sure of your internal history but one worry would be if you decided long ago intuitively based on the cultural milieu that the right answer is 'the best intervention in nuclear policy is to try to prevent first use' and then subconsciously sought out supporting arguments. I am not saying this is what happened or that you are any more guilty of this than me or anyone else, just that it is something I and we all should be wary of."
-> I think this is a super important point, actually, and I agree that it's a concern to keep in mind when reading my essay on this topic. I did have the intuitive aversion to focusing on tail-end risks before I came up with all the supporting arguments; basically, this post came about as a result of me asking myself, "Why do I think it's such a horrible idea to focus on the prevention of and preparation for the worst case of a nuclear confrontation?" I added a footnote towards the beginning of the post (fn. 2) to be more transparent about this. Thanks for raising it!
Thanks for going through the "premises" and leaving your comments on each - very helpful for me in further clarifying and reflecting on my thoughts!
On P1 (that nuclear escalation is the main or only path to existential catastrophe):
On P2:
On P3: Thanks for flagging that, even after reading my post, you feel ill-equipped to assess my claim regarding the value of interventions for preventing first use vs. interventions for preventing further escalation. Enabling readers to navigate, understand, and form an opinion on claims like that one was one of the core goals I started this summer's research fellowship with; I shall reflect on whether this post could have been different, or whether there could have been a complementary post, to better achieve this enabling function!
On P4: Haha yes, I see this now, thanks for pointing it out! I'm wondering whether renaming them "propositions" or "claims" would be more appropriate?
Thanks for taking the time to read through the whole thing and leaving this well-considered comment! :)
In response to your points:
1) Opportunity costs
2) Neglectedness:
3) High uncertainty around interventions: Similar thoughts to those expressed above. I have an unresolved tension in my mind when it comes to the value of preparedness interventions. I'm sympathetic to the case you're making (heck, I even advocated, as a co-author, for general resilience interventions in a different post a few months ago); but, at the moment, I'm not exactly sure how to square that sympathy with the concerns I simultaneously have about preparedness rhetoric and action (at least in the nuclear risk field, where the danger of such rhetoric being misused seems particularly acute, given vested interests in maintaining the system and the status quo).
4) Civilizational Collapse:
What are the most promising strategies for reducing the risks posed by nuclear weapons / reducing the risk of nuclear war? What kinds of evidence or other arguments are available for finding effective strategies in this space?
Also agree with one of the other comments: would be interesting to hear some further elaboration on what EA gets wrong, or is in danger of getting wrong, in the nuclear space.
Nice dissection of the VWH and its possible points of weakness; I found this very helpful for thinking through the argument(s) on surveillance as an intervention!
Here's one (not very decisive) comment to add to what you say about "Maybe we could change human values so nobody (or almost nobody) wants to cause global catastrophes?": This could link to efforts to understand and address "the root causes" of terrorism (and other kinds of extreme violence). Research and thinking on this seems very inconclusive and far from providing a clear recipe for interventions at this point; but given the problems of the mass-surveillance approach that you outline, "tackling root causes/motivations" might still be worth looking into as a potential alternative approach to reducing the risk of global catastrophe caused by "bad actors".
Great post, thanks for writing this up! I'm especially impressed by the compilation and description of different types of motivating emotions; it seems quite comprehensive and very relatable to me.
I have one question about a minor-ish point you make:
"This isn’t the case for everyone: some people may arrive at EA following a series of rational arguments void of strong emotional appeals."
I've been wondering about that sort of reasoning quite a bit in the past (often in response to something an EA-minded person said). How can you arrive at EA-ish conclusions and goals solely through a series of rational arguments? Do you not need emotions to feature at some point in order to define and justify how and why you seek to "make the world a better place"? (In other words: How can you arrive at the "ought" solely through rational argument?)
I'm not an expert on the topic and don't have sources on hand that would make the argument in greater detail, but I did take a course on 'The global nuclear regime' (broadly about institutional developments surrounding nuclear material and weapons control since 1945) and based on my knowledge from that, I'd suggest that there is a way to reconcile the two sets of claims.
First, I think it's important to distinguish between 'surprise attack' and 'first strike'. The former is obviously a subset of the latter, but there are also other conceivable kinds of first-strike attacks. A surprise attack, to me, sounds like an attack launched without an immediate trigger, with the purpose of hitting (and eliminating or severely weakening) an adversary unexpectedly. A nuclear first strike might, instead, be considered in a situation where a conflict is escalating to a point where a nuclear strike by the other party seems to be growing more likely. It might be considered an instrument to prevent the other party from launching their missiles by hitting them first (e.g., because the costs of waiting for them to launch before counter-striking are considered unacceptable). This comes down to definitions, ultimately, but I don't think I would describe such a first strike as a surprise attack.
Second, there is not necessarily a contradiction between there being plans for first- rather than second-strike attacks and US officials expressing doubts about the USSR's belief in US willingness to actually conduct a first strike. The US figures you mention might have thought that, in that moment, the likelihood of a US first strike was really low, and that it would hence have been surprising for the USSR to start the detection project at that moment. These US figures might also have been disingenuous or biased when assessing the honesty of the USSR leadership (I would argue that the tendency to attach hidden, often propagandistic, motives to 'enemy leaders' - without a strong evidence base or even a coherent plausibility argument as support - is fairly common among US 'hawks'). Depending on who the key US figures mentioned in your summary are (unfortunately, I haven't read The Dead Hand), it might also be that they simply weren't aware of the US's first-strike plans. Lastly (and I don't consider this one super likely), it might be that the US figures thought the Soviet leadership wouldn't expect a US first strike in spite of the plans for it (either because the Soviets didn't know about the plans, or because they didn't think the US was likely to act on them).
I'm strongly drawn to that response. I remain so after reading this initial post, but am glad that you, by writing this sequence, are offering the opportunity for someone like me to engage with the arguments/ideas a bit more! Looking forward to upcoming installments!