
Sarah Weiler

215 karma · Joined Oct 2020 · Innsbruck, Austria

Sequences (1)

Wrapping my head around the nuclear risks cause area

Comments (9)

"Our descendants are unlikely to have values that are both different from ours in a very significant way and predictable. Either they have values similar to ours or they have values we can’t predict. Therefore, trying to predict their values is a waste of time and resources."

I'm strongly drawn to that response. I remain so after reading this initial post, but am glad that you, by writing this sequence, are offering the opportunity for someone like me to engage with the arguments/ideas a bit more! Looking forward to upcoming installments!


Thanks for your comment and for adding to Aron’s response to my post!

Before reacting point by point, one more overarching warning/clarification/observation: My views on the disvalue of numerical reasoning and the use of BOTECs in deeply uncertain situations are quite unusual within the EA community (though not unheard of; see for instance this EA Forum post on "Potential downsides of using explicit probabilities" and this GiveWell blog post on "Why we can’t take expected value estimates literally (even when they're unbiased)", both of which acknowledge some of the concerns that motivate my skeptical stance). I suspect this is a major crux between us, and that it makes advances/convergence on more concrete questions (especially through a forum comments discussion) rather difficult. That is not at all meant to discourage engagement or to suggest I find your comments unhelpful (quite the contrary); I just note it in an attempt to avoid us arguing past each other.

  • On moral hazards:
    • In general, my deep-seated worries about moral hazard and other normative adverse effects feel somewhat inaccessible to numerical/empirical reasoning (at least until we come up with much better empirical research strategies for studying complex situations). To be completely honest, I can’t really imagine arguments or evidence that would be able to substantially dissolve the worries I have. That is not because I’m consciously dogmatic and unwilling to budge from my conclusions, but rather because I don’t think we have the means to know empirically to what extent these adverse effects actually exist/occur. It thus seems that we are forced to rely on fundamental worldview-level beliefs (or intuitions) when deciding on our credences for their importance. This is a very frustrating situation, but I just don’t find attempts to escape it (through relatively arbitrary BOTECs or plausibility arguments) convincing; they usually strike me as elaborate cognitive schemes to defuse a level of deep empirical uncertainty that simply cannot be defused (given the structure of the world and the research methods we know of).
    • To illustrate my thinking, here’s my response to your example:
      • I don’t think that we really know anything about the moral hazard effects that interventions to prepare for nuclear winter would have had on nuclear policy and outcomes in the Cold War era.
      • I don’t think we have a sufficiently strong reason to assign the 20% reduction in nuclear weapons to the difference in perceived costs of nuclear escalation after research on nuclear winter surfaced.
      • I don’t think we have any defensible basis for making a guess about how this reduction in weapons stocks would have been different had there been efforts to prepare for nuclear winter in the 1980s.
      • I don’t think it is legitimate to simply claim that fear of nuclear-winter-type events has no plausible effect on decision-making in crisis situations (either consciously or sub-consciously, through normative effects such as those of the nuclear taboo). At the same time, I don’t think we have a defensible basis for guessing the expected strength of this effect of fear (or “taking expected costs seriously”) on decision-making, nor for expected changes in the level of fear given interventions to prepare for the worst case.
      • In short, I don’t think it is anywhere close to feasible or useful to attempt to calculate “the moral hazard term of loss in net effectiveness of the [nuclear winter preparation] interventions”. (For a rough sketch of the kind of calculation I have in mind here, see the equation after this list.)
  • On the cost-benefit analysis and tractability of food resilience interventions:
    • As a general reaction, I’m quite wary of cost-effectiveness analyses for interventions into complex systems. That is because such analyses require that we identify all relevant consequences (and assign value and probability estimates to each), which I believe is extremely hard once you take indirect/second-order effects seriously. (In addition, I’m worried that cost-effectiveness analyses distract analysts and readers from the difficult task of mapping out consequences comprehensively, instead focusing their attention on the quantification of a narrow set of direct consequences.)
    • That said, I think there sometimes is informational value in cost-effectiveness analyses in such situations, if their results are very stark and robust to changes in the numbers used. I think the article you link is an example of such a case, and accept this as an argument in favor of food resilience interventions.
    • I also accept your case for the tractability of food resilience interventions (in the US) as sound.
    • As far as the core argument in my post is concerned, my worry is that your response ignores the majority of post-nuclear-war conditions. That is: if we have sound reasons to think that we can cost-effectively/tractably prepare for post-nuclear-war food shortages, but don’t have good reasons to think that we know how to cost-effectively/tractably prepare for most of the other plausible consequences of nuclear deployment (many of which we may have failed to identify in the first place), then I would still argue that the tractability of preparing for a post-nuclear-war world is concerningly low. I would thus continue to maintain that preventing nuclear deployment should be the primary priority. In other words: your arguments in favor of preparation interventions don’t address the challenge of preparing for the full range of possible consequences, which is why I still think avoiding those consequences ought to be the first priority.
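
(To make the phrase quoted under the moral hazards point above a bit more concrete: the kind of calculation I have in mind, and consider infeasible, would look roughly like the sketch below. Every symbol here is hypothetical and introduced purely for illustration, not something I would endorse actually estimating:

$$\mathbb{E}[\text{net benefit of preparation}] \;=\; p_{\text{war}} \cdot B_{\text{prep}} \;-\; C \;-\; \underbrace{\Delta p_{\text{war}} \cdot H}_{\text{moral hazard term}}$$

where $p_{\text{war}}$ is the probability of nuclear war, $B_{\text{prep}}$ the benefit of the preparation measures conditional on a war occurring, $C$ their cost, $\Delta p_{\text{war}}$ the increase in the probability of war attributable to the preparation effort, and $H$ the harm of such a war. My claim above is precisely that we have no defensible basis for estimating $\Delta p_{\text{war}}$, not even its sign or order of magnitude, which is why I don’t think the last term can be meaningfully calculated.)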

Ah, I think maybe there is/was a misunderstanding here. I don't reject the claim that forecasters are (much) better on average when using probabilities than when refusing to do so. My point is that the questions we're talking about (what would be the full set of important* consequences of nuclear first-use, or the full set of important* consequences of nuclear risk reduction interventions X and Z) are not your standard, well-defined, soon-to-be-resolved forecasting questions. In a sense, the very fact that these questions cannot be part of a forecasting experiment is one of the main reasons why I think they are so deeply uncertain and hard to answer with more than intuitive guesswork (if they could be part of a forecasting experiment, people could test and train their skills at assigning probabilities by answering many such questions, in which case I would be more amenable to the claim that assigning probabilities can be useful). The way I understood our disagreement, it was not about the predictive performance of actors who do vs. don't (always) use probabilities, but rather about their decision quality. The actual disagreement may be that I think there is a significant difference between the two (for some decisions, high decision quality is not a neat function of explicit predictive ability), whereas you might be close to equating them.

[*by "full set" I mean that this is supposed to include indirect/second-order consequences]

That said, I can't, unfortunately, think of any alternative ways to resolve the disagreement regarding the decision quality of people using vs. refusing to use probabilities in situations where assessing the effects of a decision/action after the fact is highly difficult... (While the comment added by Noah Scales contains some interesting ideas, I don't think it does anything to resolve this stalemate, since it is also focused on comparing & assessing predictive success for questions with a small set of known answer options)

One other thing, because I forgot about that in my last response:

"FInally, I am not that sure of your internal history but one worry would be if you decided long ago intuitively based on the cultural milieu that the right answer is 'the best intervention in nuclear policy is to try to prevent first use' and then subconsciously sought out supporting arguments.  I am not saying this is what happened or that you are any more guilty of this than me or anyone else, just that it is something I and we all should be wary of."

-> I think this is a super important point, actually, and agree that it's a concern that should be kept in mind when reading my essay on this topic. I did have the intuitive aversion against focusing on tail end risks before I came up with all the supporting arguments; basically, this post came about as a result of me asking myself "Why do I think it's such a horrible idea to focus on the prevention of and preparation for the worst case of a nuclear confrontation?" I added a footnote to be more transparent about this towards the beginning of the post (fn. 2). Thanks for raising it!

Thanks for going through the "premises" and leaving your comments on each - very helpful for myself to further clarify and reflect upon my thoughts!

On P1 (that nuclear escalation is the main or only path to existential catastrophe): 

  • Yes, I do argue for the larger claim that a one-time deployment of nuclear weapons could be the start of a development that ends in existential catastrophe even if there is no nuclear escalation. 
  • I give a partial justification of that in the post and in my comment to Aron, 
  • but I accept that it's not completely illegitimate for people to continue to disagree with me; opinions on a question like this rest on quite foundational beliefs, intuitions, and heuristics, and two reasonable people can, imo, have different sets of these. 
  • (Would love to get into a more in-depth conversation on this question at some point though, so I'd suggest putting it on the agenda for the next time we happen to see each other in-person :)!)

On P2:

  • Your suggested reformulation ("preventing the first nuclear deployment is more tractable because preventing escalation has more unknowns") is pretty much in line with what I meant this premise/proposition to say in the context of my overall argument. So, on a high-level, this doesn't seem like a crux that would lead the two of us to take a differing stance on my overall conclusion.
  • You're right that I'm not very enthusiastic about the idea of putting actual probabilities on any of the event categories I mention in the post (event categories: possible consequences of a one-time deployment of nukes; conceivable effects of different types of interventions). We're not even close to sure that we/I have succeeded in identifying the range of possible consequences (pathways to existential catastrophe) and effects (of interventions), and those consequences and effects that I did identify aren't very specific or well-defined; both of these seem like necessary, prudent steps that should precede the assignment of probabilities. I realize while writing this that you will probably just once again disagree with that leap I made (from deep uncertainty to rejecting probability assignment), and that I'm not doing much to advance our discussion here. Apologies! On your specific points: correct, I don't think we can advance much beyond an intuitive, extremely uncertain assignment of probabilities; I think that the alternative (whose existence you deny) is to acknowledge our lack of reasonable certainty about these probabilities and to make decisions in the awareness that there are these unknowns (in our model of the world); and I (unsurprisingly) disagree that institutions or people who choose this alternative will do systematically worse than those that always assign probabilities.
  • (I don't think the start-up analogy is a good one in this context, since venture capitalists get to make many bets and receive reliable, repeated feedback on those bets. Neither of these conditions seems particularly true in the nuclear risk field (whether we're talking about assigning probabilities to the consequences of nuclear weapons deployment or about the effects of interventions to reduce escalation risk / prepare for a post-nuclear war world).)

On P3: Thanks for flagging that even after reading my post, you still feel ill-equipped to assess my claim regarding the value of interventions for preventing first-use vs. interventions for preventing further escalation. Enabling readers to navigate, understand, and form an opinion on claims like that one was one of the core goals I started this summer's research fellowship with; I shall reflect on whether this post could have been different, or whether there could have been a complementary post, to better achieve this enabling function!

On P4: Haha yes, I see this now, thanks for pointing it out! I'm wondering whether renaming them "propositions" or "claims" would be more appropriate?

Thanks for taking the time to read through the whole thing and leaving this well-considered comment! :)

In response to your points:

1) Opportunity costs

  • “I do not know how a specialization would look like that is only relevant at the 100 to 1000 nukes step. I know me not being able to imagine such a specialization is only a weak argument but I am also not aware of anyone only looking at such a niche problem.” - If this is true and if people who express concern mainly/only for the worst kinds of nuclear war are actually keen on interventions that are equally relevant for preventing any deployment of nuclear weapons, then I agree that the opportunity cost argument is largely moot. I hope your impressions of the (EA) field in this regard are more accurate than mine!
  • My main concern with preparedness interventions is that they may give us a false sense of having ameliorated the danger of nuclear escalation (i.e., “we’ve done all these things to prepare for nuclear winter, so now the prospect of nuclear escalation is not quite as scary and unthinkable anymore”). So I guess I'm less concerned about these interventions the more they are framed as attempts to increase general global resilience, because that framing de-emphasizes the idea that they are effective means to substantially reduce the harms incurred by nuclear escalation. Overall, this is a point that I keep debating in my own mind and where I haven't come to a very strong conclusion yet: there is a tension in my mind between the value of system slack (which is large, imo) and the possible moral hazard of preparing for an event that we should simply never allow to occur in the first place (i.e., preparation might reduce the urgency and fervor with which we try to prevent the bad outcome in the first place).
  • I mostly disagree on the point about skillsets: I think both intervention targets (focusing on tail risks vs. preventing any nuclear deployment) are big enough to require input from people with very diverse skillsets, so I think it will be relatively rare for a person to be able to contribute meaningfully to only one of the two. In particular, I believe that both problems are in need of policy scholars, activists, and policymakers, and a focus on the preparation side might lead people in those fields to pay less attention to the goal of preventing any kind of nuclear deployment.

2) Neglectedness: 

  • I think you’re empirically right about the relative neglectedness of tail-ends & preparedness within the nuclear risk field. 
  • (I’d argue that this becomes less pronounced as you look at neglectedness not just as “number of people-hours” or “amount of money” dedicated to a problem, but also factor in how capable those people are and how effectively the money is spent (I believe that epistemically rigorous work on nuclear issues is severely neglected and I have the hope that EA engagement in the field could help ameliorate that).)
  • That said, I must admit that the matter of neglectedness is a very small factor in convincing me of my stance on the prioritization question here. As explained in the post, I think that a focus on the tail risks and/or on preparedness is plausibly net negative because of the intractability of working on them and because of the plausible adverse consequences. In that sense, I am glad that those two are neglected and my post is a plea for keeping things that way.

3) High uncertainty around interventions: Similar thoughts to those expressed above. I have an unresolved tension in my mind when it comes to the value of preparedness interventions. I'm sympathetic to the case you're making (heck, I even advocated, as a co-author, for general resilience interventions in a different post a few months ago); but, at the moment, I'm not exactly sure I know how to square that sympathy with the concerns I simultaneously have about preparedness rhetoric and action (at least in the nuclear risk field, where the danger of such rhetoric being misused seems particularly acute, given vested interests in maintaining the system and status quo).

4) Civilizational Collapse:

  • My claim about civilization collapse in the absence of the deployment of multiple nukes is based on the belief that civilizations can collapse for reasons other than weapons-induced physical destruction. 
  • Some half-baked, very fuzzy ideas of how this could happen are: destruction of communities’ social fabric and breakdown of governance regimes; economic damage, breakdown of trade and financial systems, and attendant social and political consequences; cyber warfare, and attendant social, economic, and political consequences. 
  • I have not spent much time trying to map out the pathways to civilizational collapse, and it could be that such a scenario is much less conceivable than I currently imagine. I think I'm currently working on the heuristic that societies and societal functioning are hyper-complex and that I have little ability to actually imagine how big disruptions (like a nuclear conflict) would affect them, which is why I shouldn't rule out the chance that such disruptions cascade into collapse (through chains of events that I cannot anticipate now). 
  • (While writing this response, I just found myself staring at the screen for a solid 5 minutes and wondering whether using this heuristic is bad reasoning or a sound approach on my part; I lean towards the latter, but might come back to edit this comment if, upon reflection, I decide it’s actually more the former)
Answer by Sarah Weiler · Aug 28, 2022

What are the most promising strategies for reducing the risks posed by nuclear weapons / reducing the risk of nuclear war? What kinds of evidence or other arguments are available for finding effective strategies in this space?

Also agree with one of the other comments: would be interesting to hear some further elaboration on what EA gets wrong, or is in danger of getting wrong, in the nuclear space.

Nice dissection of the VWH and its possible points of weakness; I found this very helpful for thinking through the argument(s) on surveillance as an intervention!

Here's one (not very decisive) comment to add to what you say about "Maybe we could change human values so nobody (or almost nobody) wants to cause global catastrophes?": This could link to efforts to understand and address "the root causes" of terrorism (and other kinds of extreme violence). Research and thinking on this seems very inconclusive and far from providing a clear recipe for interventions at this point; but given the problems of the mass-surveillance approach that you outline, "tackling root causes/motivations" might still be worth looking into as a potential alternative approach to reducing the risk of global catastrophe caused by "bad actors".

Great post, thanks for writing this up! I'm especially impressed by the compilation and description of different types of motivating emotions, seems quite comprehensive and very relatable to me.

I have one question about a minor-ish point you make:

"This isn’t the case for everyone: some people may arrive at EA following a series of rational arguments void of strong emotional appeals."

I've been wondering about that sort of reasoning quite a bit in the past (often in response to something an EA-minded person said). How can you arrive at EA-ish conclusions and goals solely through a series of rational arguments? Do you not need emotions to feature at some point in order to define and justify how and why you seek to "make the world a better place"? (In other words: how can you arrive at the "ought" solely through rational argument?)

I'm not an expert on the topic and don't have sources on hand that would make the argument in greater detail, but I did take a course on 'The global nuclear regime' (broadly about institutional developments surrounding nuclear material and weapons control since 1945) and based on my knowledge from that, I'd suggest that there is a way to reconcile the two sets of claims. 

First, I think it's important to distinguish between 'surprise attack' and 'first strike'. The former is obviously a subset of the latter, but there are also other conceivable kinds of first-strike attacks. A surprise attack, to me, sounds like an attack that is launched without an immediate trigger, with the purpose of hitting (and eliminating or severely weakening) an adversary unexpectedly. A nuclear first strike might, instead, be considered in a situation where a conflict is escalating to a point where a nuclear strike by the other party seems to be growing more likely. It might be considered as an instrument to prevent the other party from launching their missiles by hitting them first (e.g. because the costs of waiting for them to launch before counter-striking are considered unacceptable). This comes down to definitions, ultimately, but I don't think I would describe such a first strike as a surprise attack.

Second, there is not necessarily a contradiction between there being plans for first- rather than second-strike attacks and US officials expressing doubts about the USSR's belief in US willingness to actually conduct a first strike. The US figures you mention might have thought that, in that moment, the likelihood of a US first strike was really low and that it would therefore have been surprising for the USSR to start the detection project at that point. These US figures might also have been disingenuous or biased when assessing the honesty of the USSR leadership (I would argue that the tendency to attach hidden, often propagandistic, motives to 'enemy leaders' - without a strong evidence base or even a coherent plausibility argument as support - is fairly common among US 'hawks'). Depending on who the key US figures mentioned in your summary are (unfortunately, I haven't read The Dead Hand), it might also be that they simply weren't aware of the first-strike plans of the US. Lastly (and I don't consider this one super likely), it might be that the US figures just thought that the Soviet leadership wouldn't expect a US first strike in spite of the plans for it (either because the Soviets didn't know about the plans, or because they didn't think the US was likely to act on them).