Haydn has been a Research Associate and Academic Project Manager at the University of Cambridge's Centre for the Study of Existential Risk since Jan 2017.
I think he actually quit his PhD. So you could ask him why, and what factors people should consider when choosing whether to do a PhD, or when deciding to change course partway through one.
Before that he was doing a PhD in the Philosophy of Machine Learning at Cambridge, on the topic of "to what extent is the development of artificial intelligence analogous to the biological and cultural evolution of human intelligence?"
I'm very pro framing this as an externality. It doesn't just help with left-leaning people; it can also be helpful when talking to other audiences, such as those immersed in economics or antitrust/competition law.
For more on this risk, see Daniel Deudney's interesting recent book Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity (June 2020).
This is really fascinating and useful work, thanks for putting it together (and everyone who contributed)!
Oof, this comment was a shame to read - I downvoted it. It's an ad hominem attack with no discussion of the content of the paper.
Also, the paper has ten authors and got through Nature peer review - it seems a stretch to write it off as just two people's ideology.
Just to respond to the nuclear winter point.
I actually think the EA world has been pretty good epistemically on winter: appropriately humble and exploratory, mostly funding research to work out how big a problem it is, not basing big claims on (possibly) unsettled science. The argument for serious action on reducing nuclear risk doesn't rely on claims about nuclear winter - though nuclear winter would really underline its importance. The Rethink Priorities report you critique talks at length about the debate over winter, which is great. See also 80,000 Hours profile, which is similarly cautious/hedged.
The EA world has been the major recent funder of research on nuclear winter: OpenPhil in 2017 and 2020, perhaps Longview, and soon FLI. The research has advanced considerably since 2016. Indeed, most of the research ever published on nuclear winter has appeared in the last few years, using the latest climate modelling. The most recent papers are getting published in Nature. I would disagree that there's a "reliance on papers that have a number of obvious flaws".
So as I see it, the main phenomenon is that there's just much more being posted on the Forum. I think there are two factors behind that: 1) community growth and 2) strong encouragement to post on the Forum. E.g. there's lots of encouragement to post from: the undergraduate introductory/onboarding fellowships, the AGI/etc. 'Fundamentals' courses, the SERI/CERI/etc. Summer Fellowships, or this or this (h/t John below).
The main phenomenon is that there is a lot more posted on the Forum, mostly from newer/more junior people. It could well be the case that the average quality of posts has gone down. However, I'm not so sure that the quality of the best posts has gone down, or that there are fewer of the best posts every month. Nevertheless, separating the signal from the noise has become harder.
But then the Forum serves several purposes. To take two of them: one (which is the one commenters here are most focussed on) is "signal" - producing really high-quality content - and it's certainly got harder to find that. But another purpose is more instrumental - it's for more junior people to demonstrate their writing/reasoning ability to potential employers. Or it's to act as an incentive/end goal for them to do some research - where the benefit is more that they see whether it's a fit for them or not, but they wouldn't actually do the work if it wasn't structured towards writing something public.
So the main thing those of us who are looking for "signal" need to do is find better/new ways to do so. The curated posts are a positive step in this direction, as are the weekly summaries and the monthly summaries.
This is really great work! Very clearly structured and written, persuasively argued and (fairly) well supported by the evidence.
I’m currently doing my PhD/DPhil on the history of arms control agreements, and 1972 is one of my four case studies. So obviously I think it’s really important and interesting, and that more people should know about it – and I have a lot of views on the subject! So I’ve got a few thoughts on methodology, further literature and possible extensions, which I’ll share below. But they’re all additions to what is excellent work.
It’s a bit unclear to me what your claim is about the link between these Track II discussions and the ultimate outcome of the two 1972 agreements. It’s not that they were sufficient (the SALT negotiations were needed, and even then the Kissinger/Dobrynin backchannel was needed). Is it that the discussions were necessary for the outcome? Or just that they contributed in a positive way? I would be interested in your view.
The limitations section is good. But I think you could have been even clearer on the limits and strengths of a ‘single N’ approach. The limit is how far this can be generalised to the entire ‘universe of cases’. However, single N also has strengths - it’s most useful for developing and exploring mechanisms. So I think you could frame your contribution as exploring and deepening an analysis of the mechanisms. For example, something like "Two main mechanisms are proposed in the literature; this case study provides strong evidence for mechanism 1 (conveying new conceptions/ideas) and demonstrates how it works".
On another point, I'd be concerned that if you chose this case because it was one of the most successful Track II cases, you'd be ‘selecting on the dependent variable’ (apologies for the political science jargon – something like “cherry-picked for having a particular outcome”). Can you justify your motivation and case selection differently, for example as one of (the?) biggest and most sustained Track II dialogues? E.g. you say: “when the first Pugwash conference happened in 1957, there were either no, or almost no, other opportunities for Soviet and American scientists to have conversations about security policy and nuclear issues”
Adler + Schelling are great on the US side of the story. I assume you would be familiar with them, but I don’t see them cited. If you haven’t read them, you’re in for a treat – they’re great, and largely agree with you.
If you want to go down a tangent, you might want to engage with a new line of argument: that many US nuclear policymakers never accepted the parity of MAD, but continued seeking advantage (Green and Long 2017; Green 2020; Lieber and Press 2006, 2020; Long and Green 2015).
As a sidenote, I’m curious why so much of the research on the two 1972 nuclear agreements focusses on ABM. ABM is the more intellectually interesting and counterintuitive. But it’s not clear to me that it was *more important* than the limits on offensive weapons.
Next steps/possible extensions
My impression is that your main audiences are funders (and to a lesser extent general researchers and activists) within GCR. However, if you wanted to adapt it, this could very plausibly become a paper. It's already paper length, at ~8,000 words. If you wanted to go down that route, there are a few things I'd do:
If you wanted to continue this research, you could contrast this case with a similar conference and see what the difference in outcomes was; or try to draw up a list of the whole universe of cases (all major Track II dialogues).
Hmm, I strongly read it as focussed on magnitude 7. E.g. in the paper they focus on magnitude-7 eruptions, and the 1/6 probability this century: "The last magnitude-7 event was in Tambora, Indonesia, in 1815." / "Given the estimated recurrence rate for a magnitude-7 event, this equates to more than US$1 billion per year." This would be corroborated by their thread, Forum post, and previous work, which emphasise magnitude 7 and 1/6.
Sorry to be annoying/pedantic about this. I'm being pernickety because I view a key thrust of their research as distinguishing magnitude 7 from magnitude 8. We can't just group magnitude 7 (1/6 chance) with magnitude 8 and write them both off as a teeny 1/14,000 chance. We need to distinguish 7 from 8, consider their severity/probability separately, and prioritise them differently.
Hi Pablo and Matthew, just a quick one:
"Michael Cassidy and Lara Mani warn about the risk from huge volcanic eruptions. Humanity devotes significant resources to managing risk from asteroids, and yet very little into risk from supervolcanic eruptions, despite these being substantially more likely. The absolute numbers are nonetheless low; super-eruptions are expected roughly once every 14,000 years. Interventions proposed by the authors include better monitoring of eruptions, investments in preparedness, and research into geoengineering to mitigate the climatic impacts of large eruptions or (most speculatively) into ways of intervening on volcanoes directly to prevent eruptions."
However, their Nature paper is about magnitude-7 eruptions, which may have a 1/6 probability this century, not supervolcanic eruptions (magnitude 8), which as you point out have a much lower probability.
I think it's a fascinating paper that applies importance/neglectedness/tractability, in a prominent, rigorous and novel way, to a comparison of two hazards:
"Over the next century, large-scale volcanic eruptions are hundreds of times more likely to occur than are asteroid and comet impacts, put together. The climatic impact of these events is comparable, yet the response is vastly different. ‘Planetary defence’ receives hundreds of millions of dollars in funding each year, and has several global agencies devoted to it. [...] By contrast, there is no coordinated action, nor large-scale investment, to mitigate the global effects of large-magnitude eruptions. This needs to change."