The discussion of AI risk became mainstream only recently, so amateurs were able to make real contributions within the past decade. I think this experience exacerbates the self-assuredness of nuke risk amateurs and leads them not to bother researching the expertise of the nuke community.

For decades, many experts have worked on nuke strategy, and have come up with at least a few risk-reducing paradigms:

  1. Arms control can work: nations can achieve their achievable nuke goals (e.g. deterrence and maybe compellence) despite lower nuke counts, and can save money doing so.
  2. Counterforce is (arguably) better than countervalue.
  3. Escalation is arguably a ladder, not a binary on/off switch.

Based on its history of at least partial risk-reducing success, academically rigorous argument, and the sheer number of thoughtful hours spent, the establishment nuke community has probably done a decent job and improvements are probably hard to find. One place to start is the book Wizards of Armageddon by Fred Kaplan. It isn't the best book on nuke strategy generally, but it focuses on the history of the nuke community, so it will hopefully engender at least some respect for the nuke community and inspire further reading. 

I think that in a field as well-established as nuke risk, improvements are more likely to be made on top of the existing field rather than by trying to re-invent the field.

Post-script: A nuke professional criticized the EA community as unhelpfully amateurish in a recent podcast (https://www.armscontrolwonk.com/archive/1216048/the-wizards-of-armageddon/), but he does mention some positive work by Peter Scoblic, which I believe is https://forum.effectivealtruism.org/posts/W8dpCJGkwrwn7BfLk/nuclear-expert-comment-on-samotsvety-nuclear-risk-forecast-2


Well, this isn’t how I wanted to start my engagement with the EA community. 

I wouldn’t call the efforts of the EA community amateurish; if I said it or implied it, I am wrong. I am actually really happy you exist.

Other things I actually think:

  1. The EA community is an important innovation in philanthropy similar to the rise of analytics in sports.  The reference to mosquito nets was not intended to be mocking.  On the contrary, I sincerely understand why someone would prefer to give based on data instead of superstition.  My community finds this uncomfortable for the same reason that dinosaurs don’t like asteroids.
  2. A lot of the work I see from the EA community on nuclear issues does leave me cold, but is that an EA problem or a nuclear wonk problem?  I think it’s classic GIGO– garbage in, garbage out. Who is responsible for the garbage inputs? For the most part, that would be us.  We have to provide much better data relevant to the questions of those looking to approach these problems analytically.
  3. The idea that people’s eyes glaze over when I talk at cocktail parties was intended to be self-deprecating.  My community is dominated by the obscure and arcane, and kept that way with an absurd amount of gatekeeping. I was trying to say “We’re boring and pedantic and maybe that’s why no one listens to us."

We need to do better – both providing better data and providing data that you need -- but I am slightly freaked out about the size of the gap we need to close.  I want to close that gap and I am kind of bummed if the way I said that in the podcast makes that less likely.  

TL;DR: I don’t think you suck, I think you are poorly served by those of us who make your data.  

Thanks for the engagement. To be clear, are you Jeffrey Lewis quoted in the post above? :) 

Yes, I am me.

Thanks Jeffrey! I hope we're a community where it doesn't matter so much whether you think we suck. If you think the EA community should engage more with nuclear security issues and should do so in different ways, I'm sure people would love to hear it. I would! Especially if you'd help answer questions like: How much can work on nuclear security reduce existential risk? What kind of nuclear security work is most important from an x-risk perspective?

I'd love to hear more about what your concerns and criticisms are. For example, I'd love to know: Is the Scoblic post the main thing that's informing your impression? Do you have views on this set of posts about the severity of a US-Russia nuclear exchange from Luisa Rodriguez (https://forum.effectivealtruism.org/s/KJNrGbt3JWcYeifLk)? Is there effective altruist funding or activity in the nuclear security space that you think has been misguided?

Kit

It seems extremely clear that working with the existing field is necessary to have any idea what to do about nuclear risk. That said, being a field specialist seems like a surprisingly small factor in forecasting accuracy, so I’m surprised by that being the focus of criticism.

I was interested in the criticism (32:02), so I transcribed it here:

Jeffrey Lewis: By the way, we have a second problem that arises, which I think Wizards really helps explain: this is why our field can’t get any money.

Aaron Stein: That’s true.

Jeffrey Lewis: Because it’s extremely hard to explain to people who are not already deep in this field how these deterrence concepts work, because they don’t get it. Like if you look at the work that the effective altruism community does on nuclear risk, it’s as misguided as SAC’s original, you know, approach to nuclear weapons, and you would need an entire RAND-sized outreach effort. And there are some people who’ve tried to do this. Peter Scoblic, who is fundamentally a member of that community, wrote a really nice piece responding to some of the like not great effective altruism assessments of nuclear risk in Ukraine. So I don’t want to, you know, criticise the entire community, but… I experience this at a cocktail party. Once I start talking to someone about nuclear weapons and deterrence… if they don’t do this stuff full-time, the popular ideas they have about this are… (a) they might be super bored, but if they are willing to listen, the popular ideas they have about it are so misguided, that it becomes impossible to make enough progress in a reasonable time. And that’s death when you’re asking someone to make you a big cheque. That’s much harder than ‘hi, I want to buy some mosquito nets to prevent malaria deaths’. That’s really straightforward. This… this is complex.

It’s a shame that this doesn’t identify any specific errors, although that is consistent with Lewis’ view that the errors can’t be explained in minutes, perhaps even in years.

Speaking for myself, I agree with Lewis that popular ideas about nuclear weapons can be wildly, bizarrely wrong. That said, I’m surprised he highlights effective altruism as a community he’s pessimistic about being able to teach. The normal ‘cocktail party’ level of discourse includes alluring claims like ‘XYZ policy is totally obvious; we just have to implement it’, and the effective altruism people I’ve spoken to on nuclear issues are generally way less credulous than this, and hence more interested in understanding how things actually work.

I am skeptical of attempts to gatekeep here. E.g. I found Scoblic's response to Samotsvety's forecast less persuasive than their post, and I am concerned that "amateurish" might just be used as a scold because the numbers someone came up with are too low for someone else's liking, or because they don't like putting numbers on things at all and feel it gives a false sense of precision.

That isn't to say this is the only criticism that has been made, but just to highlight one I found unpersuasive.

I am not an expert, but personally I see the current crop of nuke experts as primarily "evangelizers of the wisdom of the past". The nuke experts of the past, such as Tom Schelling, are more impressive (and more mathematical). If a better approach to nuke risk were easy to find, it would probably already have been found by one of the many 20th-century geniuses who looked at the problem. If so, the best place to make a marginal contribution to nuke risk is evangelizing the wisdom of the past, which can help avoid backsliding on things like arms control treaties. (This also raises the question of the tractability of a geopolitical approach to reducing risk, versus preparing for and adapting to nuclear war's environmental damage, versus other non-nuke cause areas.)

Speaking as someone who 1) has never been prompted to make any career or philanthropic decisions regarding nuclear risk reduction (and therefore not been motivated to think very rigorously about the subject), 2) may not have had a good sample/exposure to nuclear risk reduction advocacy (although I have had very little interaction with e.g., ICAN, which is a plus) 3) does not have formal academic or career experience in the nuclear realm; but 4) has been exposed to nuclear risk and strategy more so than the average person through personal research/curiosity, listening to podcasts on the subject, discussing the topic briefly with friends, and a summer position at the Center for Global Security Research:

I have long been skeptical of the prospects for serious net positive progress in the nuclear security realm, and every time I’ve tried to be open-minded about the idea of devoting lots of attention and resources to the subject, I’ve come away with equally if not more pessimistic views on the field. It often gives me the surface-level feeling of watching people trying to kick down a brick wall, insisting that “it’s going to work, we just need more funding and time.” People like the NTI’s director get asked a straight question (“What are we going to do to reduce nuclear risk?”) and can’t seem to give a straight or compelling answer: just vague goal-wishing (vs. policy proposals, let alone compelling advocacy strategies), or policy proposals which seem like they may even introduce some risks (even if not adding risk on balance, and possibly even reducing it), assuming the proposals were even politically tractable. Of course, much of this might not be so problematic, but then you hit a foundational issue: it seems very, very unlikely that we will face extinction due to nuclear war, whereas the probability of risks from alternative sources (e.g., engineered pandemics, unaligned AI) is much greater (at least in magnitude terms).

So, perhaps I don’t understand their perspective, as Dr. Lewis suggests in the podcast. However, when I have tried to understand their perspective—including by listening to hours of videos (talks) and podcasts by people in the nuclear risk field and reading various Bulletin/UCS articles—I haven’t seen a compelling case made by the traditional figures in the field. That’s not to say the field is hopeless, but I am fairly skeptical of many of the existing approaches’ likelihood of having much positive expected value if scaled. Perhaps it would have helped if Dr. Lewis made clear what EA doesn’t get, but I either missed it or he didn’t specify… (reinforcing my skepticism)

That was a lot of bottled-up negativity and skepticism, but I’m happy that people are working on risk reduction as opposed to ~95% of other policy fields. I just want to see the work be more efficacy-oriented rather than principlistic (among other desires).

I really appreciate many of the points mentioned herein, and understand/share some of the skepticism and concern.  These comments by Jeffrey:

and Harrison 

are interrelated to me and relevant to whether and how these communities get more involved with each other. There is much promising work to do, yet our field also needs to evolve. Perhaps we can create more nuclear expert/EA engagement opportunities; if so, from the policy wonk side, we need to approach them in open-minded and genuine ways. (FWIW, I think Jeffrey is a top-notch expert for such dialogue.) I'm at the start of a 2-week EA coworking experience, and the mutual benefits and learning were clear within the first hours of my time here.

Definitely agree! We should engage more with the field. I would note there's good stuff, eg here, here, here, here.

Who critiques EA, and at what timestamp in the podcast?

It's Dr. Jeffrey Lewis at 32:08
