Thanks for writing this! I like the post a lot. This heuristic is one of the criteria we use to evaluate bio charities at Founders Pledge (see the "Prioritize Pathogen- and Threat-Agnostic Approaches" section starting on p. 87 of my Founders Pledge bio report).
One consideration I didn't see listed among your premises is the general point about hedging against uncertainty: we're just very uncertain about what a future pandemic might look like and where it will come from, and the threat landscape only becomes more complex with technological adva...
Hi Ulrik, thanks for this comment! Very much agreed on the communications failures around aerosolized transmission. I wonder how much the mechanics of transmission would enter into a policy discussion around GUV (rather than a simplified “These lights can help suppress outbreaks.”)
An interesting quote relevant to bio attention hazards from an old CNAS report on Aum Shinrikyo:
"This unbroken string of failures with botulinum and anthrax eventually convinced the group that making biological weapons was more difficult than Endo [Seiichi Endo, who ran the BW program] was acknowledging. Asahara [Shoko Asahara, the founder/leader of the group] speculated that American comments on the risk of biological weapons were intended to delude would-be terrorists into pursuing this path."
Footnote source in the report: "Interview with Fumihiro ...
Thanks for this post! I'm not sure cyber is a strong example here. Given how little is known publicly about the extent and character of offensive cyber operations, I don't feel that I'm able to assess the balance of offense and defense very well.
Longview’s nuclear weapons fund and Founders Pledge’s Global Catastrophic Risks Fund (disclaimer: I manage the GCR Fund). We recently published a long report on nuclear war and philanthropy that may be useful, too. Hope this helps!
Just saw reporting that one of the goals for the Biden-Xi meeting today is "Being able to pick up the phone and talk to one another if there’s a crisis. Being able to make sure our militaries still have contact with one another."
I had a Forum post about this earlier this year (with my favorite title), Call Me, Maybe? Hotlines and Global Catastrophic Risks, with a section on U.S.-China crisis comms, in case it's of interest:
..."For example, after the establishment of an initial presidential-level communications link in 1997, Chinese leaders did not respond
There is currently just one track 2/track 1.5 diplomatic dialogue between the U.S. and China that focuses on strategic nuclear issues. My rough estimate is that it would cost ~$250K/year to start one more.
China and India. Then I'm generally excited about leveraging U.S. alliance dynamics and building global policy advocacy networks, especially for risks from technologies that seem to be becoming cheaper and more accessible, e.g. in synthetic biology.
I think in general, it's a trade-off along the lines of uncertainty and leverage -- GCR interventions pull bigger levers on bigger problems, but in high-uncertainty environments with little feedback. I think evaluations in GCR should probably be framed in terms of relative impact, whereas we can more easily evaluate GHD in terms of absolute impact.
This is not what you asked about, but I generally view GCR interventions as highly relevant to current-generation and near-term health and wellbeing. When we launched the Global Catastrophic Risks Fund last year,...
I think: read a lot, interview a lot of people who are smarter (or more informed, connected, etc.) than I am about the problem, snowball sample from there, and then write a lot.
I wonder if FP's research director, @Matt_Lerner, has a better answer for me, or for FP researchers in general.
Thanks for the question! In 3 years, this might include:
A few that come to mind:
Definitely difficult. I think the most promising path forward is my colleagues' work at Founders Pledge (e.g. How to Evaluate Relative Impact in High-Uncertainty Contexts) and iterating on "impact multipliers" to make ever-more-rigorous comparative judgments. I'm not sure that this is a problem unique to GCRs or climate. A more high-leverage, risk-tolerant approach to global health and development faces the same issues, right?
Maybe, but I'm not really qualified to say much about this. I do think we need to think beyond New START (which was going to need a follow-on agreement anyway), and beyond arms control as "formal, legally binding, ratified treaties." I think some nonprofits and funders have been playing a very reactive kind of whack-a-mole game when it comes to nuclear security, reacting to the latest news about new weapon systems, doctrinal changes, and current events. Instead, are there ways to think bigger about arms control, to make some of these ideas more politically...
I would say cash-constrained. There are plenty of good opportunities out there, and a field of smart scholars, advocates, and practitioners with transferable skills. We just need a lot more money.
Thanks for the question, Johannes! My best elevator pitch is roughly an ITN case that starts with neglectedness:
The biggest funder in nuclear security just withdrew from the field, leaving only ~$32 million/year in philanthropic funding. That's a third of the budget of Oppenheimer, and several orders of magnitude smaller than philanthropic spending on climate change. This is a huge blow to a field that's already small and aging, and it threatens to leave defense contractors and bureaucrats to determine nuclear policy. But it's also an opportunity to reshape the fiel...
Hi Angelina! Thanks for the great question! There are several government actors, like DTRA, the Minerva Research Initiative, and the STRATCOM Academic Alliance, that play an important role in the non-governmental nuclear security space, including with funding. Then there are the National Labs as well as FFRDCs and UARCs that receive government funding and often work on relevant issues. Then there are defense contractors that will provide funding to think tanks and organizations that just so happen to support the latest weapons...
This does help answer the question, but it conflates extinction risk with existential risk, which I think is a big mistake in general. This chapter in How Worlds Collapse does a nice job of explaining this:
..."Currently, existential risk scholars tend to focus on events and processes for which a fairly direct, simple story can be told about how they could lead to extinction. [...] However, if there is a substantial probability that collapse could destroy humanity's longterm potential [including by recovery with bad values], this should change one's view of ca
For anyone who is interested, Founders Pledge has a longer report on this (with a discussion of funding constraints as well as funding ideas that could absorb a lot of money), as well as some related work on specific funding opportunities like crisis communications hotlines.
Arturo, thank you for this comment and the very kind words!
I really like your point about beneficially "dual-use" interventions, and that we might want to look for right-of-boom interventions with near-term positive externalities. I think that's useful for market-shaping and for political tractability (no one likes to invest in something that their successor will take credit for) -- and it's just a good thing to do!
It feels similar to the point that bio-risk preparedness has many current-gen benefits, like Kevin Esvelt's point here that "Crucially, a...
Thanks, Vasco. I totally forgot to reply to your comment on my previous post -- my apologies!
I think you raise a good general point that we'd expect societal spending after a catastrophe to be high, especially given the funder behavior we see for newsworthy humanitarian disasters.
There are a few related considerations here, all of them touching on the issue you also raise: "Coming up with good interventions in little time may be harder."
Just a note that the Likert scale in the poll is not symmetrical ("Agree" vs. "Strongly Disagree").
Agree with Johannes here on the bias in much of the nuclear winter work (and I say that as someone who thinks catastrophic risk from nuclear war is under-appreciated). The political motivations are fairly well known and easy to spot in the papers.
Hi Quinn! Thanks for this comment. Yes, I expect any theory of change for private actors here will run through policy advocacy. This both provides massive leverage (by using government funds) and is just necessary given the subject matter.
I wouldn't say it stops at a white paper -- one could organize track II dialogues to discuss the systems, lobby government, give policy briefings at a think tank, hold side events at international security conferences and treaty review conferences, etc.
This could also take the form of advisory roles (I'...
Thanks, David! I really appreciate this comment. One reason I find this left/right framework more intuitive than "prevention, response, and resilience" is that there are right-of-boom interventions that I would classify as "prevention." For example, I think of escalation management after limited first use as "preventing" the largest nuclear wars (especially if we think such a war poses qualitatively different problems).
Your cost-effectiveness models are very helpful, and I plan to cite them in the bigger project :)
Thanks for the kind comment, Stephen! You're right that I phrased that wrong -- it is about tractability, not probability. I agree with you that the tractability of escalation control is probably the biggest issue here, but I also think we should expect low-hanging fruit given the relative neglectedness. There are a couple of concrete projects that I am (or would be) excited about:
Hi ParetoPrinciple! I appreciate your engaging with the document :)
I quote Schelling throughout (and think he actually makes some of the points you hint at more clearly in The Strategy of Conflict — eg the quote in footnote 61 here). You’re definitely right that no hotlines discussion would be complete without this!
Thanks, Will! Really appreciate this comment :)
Credit for the title actually goes to my colleagues on the FP research team (I believe it was Tom or Johannes who first came up with it).
Hi Rani, it’s great to see the report out. It’s good to have this clear deep dive on the canonical case. I especially like that it points to some attributes of track II dialogues that we should pay special attention to when evaluating them as potential interventions. Great work!
Thanks for writing this! I think it's great. Reminds me of another wild animal metaphor about high-stakes decision-making under uncertainty -- Reagan's 1984 "Bear in the Woods" campaign ad:
There is a bear in the woods. For some people, the bear is easy to see. Others don't see it at all. Some people say the bear is tame. Others say it's vicious and dangerous. Since no one can really be sure who's right, isn't it smart to be as strong as the bear -- if there is a bear?
I think that kind of reasoning is helpful when communicating about GCRs and X-risks.
Really enjoyed reading this and learned a lot. Thank you for writing it! I’m especially intrigued by the proposal for regional alliances in table 6 — including the added bit about expansionist regional powers in the co-benefits column of the linked supplemental version of the table.
I was curious about one part of the paper on volcanic eruptions. You wrote, eg, that "Indonesia harbours many of the world's large volcanoes from which an ASRS could originate (eg, Toba and Tambora eruptions)." Just eyeballing maps of the biggest known volcanoes, the overlap with...
Thank you! I also really struggle with the clock metaphor. It seems to have just gotten locked in as the Bulletin took off in the early Cold War. The time bomb is a great suggestion — it communicates the idea much better.
Thanks for engaging so closely with the report! I really appreciate this comment.
Agreed on the weapon speed vs. decision speed distinction — the physical limits to the speed of war are real. I do think, however, that flash wars can make non-flash wars more likely (eg a cyber flash war unintentionally intrudes on NC3 system components, which gets misinterpreted as preparation for a first strike, etc.). I probably should have spelled that out more clearly in the report.
I think we actually agree on the broader point — it is possible to leverage autonomous system...
Hi Kevin,
Thank you for your comment and thanks for reading :)
The key question for us is not “what is autonomy?” — that’s bogged down the UN debates for years — but rather “what are the systemic risks of certain military AI applications, including a spectrum of autonomous capabilities?” I think many systems around today are better thought of as closer to “automated” than truly “autonomous,” as I mention in the report, but again, I think that binary distinctions like that are less salient than many people think. What we care about is the multi-dimensional pr...
Hi Haydn,
That’s a great point. I think you’re right — I should have dug a bit deeper on how the private sector fits into this.
I think cyber is an example where the private sector has really helped to lead — like Microsoft’s involvement at the UN debates, the Paris Call, the Cybersecurity Tech Accord, and others — and maybe that’s an example of how industry stakeholders can be engaged.
I also think that TEVV-related norms and confidence building measures would probably involve leading companies.
I still broadly think that states are the lever to target at ...
Thank you for the reply! I definitely didn’t mean to mischaracterize your opinions on that case :)
Agreed, a project like that would be great. Another point in favor of your argument that this is a dynamic to watch out for in AI competition: verifying claims of superiority may be harder for software (along the lines of Missy Cummings's "The AI That Wasn't There" https://tnsr.org/roundtable/policy-roundtable-artificial-intelligence-and-international-security/#essay2). That seems especially vulnerable to misperceptions.
Hi Haydn,
This is awesome! Thank you for writing and posting it. I especially liked the description of the atmosphere at RAND, and big +1 on the secrecy heuristic being a possibly big problem.[1] Some people think it helps explain intelligence analysts' underperformance in the forecasting tournaments, and I think there might be something to that explanation.
We have a report on autonomous weapons systems and military AI applications coming out soon (hopefully later today) that gets into the issue of capability (mis)perception in arms races too, an...
Thanks for the kind words, Christian - I'm looking forward to reading that report; it sounds fascinating.
I agree with your first point - I say "They were arguably right, ex ante, to advocate for and participate in a project to deter the Nazi use of nuclear weapons." Actions in 1939-42 or around 1957-1959 are defensible. However, I think this highlights that 1) accurate information in 1942-3 (and 1957) would have been useful, and 2) it's very interesting that when they did find out the accurate information (in 1944 and 1961), it didn't stop the arms buildup.
The quest...
Hi Fin!
This is great. Thank you for writing it up and posting it! I gave it a strong upvote.
(TLDR for what follows: I think this is very neglected, but I'm highly uncertain about the tractability of formal treaty-based regulation)
As you know, I did some space policy-related work at a think tank about a year ago, and one of the things that surprised us most was how neglected the issue is — there are only a handful of organizations seriously working on it, and very few of them are the kinds of well-connected and -respected think tanks that actually influence poli...
Thank you for writing this overview! I think it's very useful. A few notes on the famous "30%" claim:
One under-appreciated takeaway that you hint at i...
Experimental Wargames for Great Power War and Biological Warfare
Biorisk and Recovery from Catastrophe, Epistemic Institutions
This is a proposal to fund a series of "experimental wargames" on great power war and biological warfare. Wargames have been a standard tool of think tanks, the military, and the academic IR world since the early Cold War. Until recently, however, these games were largely used to uncover unknown unknowns and help with scenario planning. Most such games continue to be unscientific exercises. Recent work on "experimental wa...
Creative Arms Control
Biorisk and Recovery from Catastrophe
This is a proposal to fund research efforts on "creative arms control," or non-treaty-based international governance mechanisms. Traditional arms control -- formal treaty-based international agreements -- has fallen out of favor among some states, to the extent that some prominent policymakers have asked whether we've reached "The End of Arms Control."[1] Treaties are difficult to negotiate and may be poorly suited to some fast-moving issues like autonomous weapons, synthetic biology, and cyber...
A Project Candor for Global Catastrophic Risks
Biorisk and Recovery from Catastrophe, Values and Reflective Processes, Effective Altruism
This is a proposal to fund a large-scale public communications project on global catastrophic risks (GCRs), modeled on the Eisenhower administration's Project Candor. Project Candor was a Cold War public relations campaign to "inform the public of the realities of the 'Age of Peril'" (see Unclassified 1953 Memo from Eisenhower Library). Policymakers were concerned that the public did not yet understand that the threa...
FWIW, @Rosie_Bettle and I also found this surprising and intriguing when looking into far-UVC, and ended up recommending that philanthropists focus more on "wavelength-agnostic" interventions (e.g. policy advocacy for GUV generally).