All of christian.r's Comments + Replies

FWIW, @Rosie_Bettle and I also found this surprising and intriguing when looking into far-UVC, and ended up recommending that philanthropists focus more on "wavelength-agnostic" interventions (e.g. policy advocacy for GUV generally).

2
Max Görlitz
1mo
That sounds reasonable! Thanks for adding your impression.

Thanks for writing this! I like the post a lot. This heuristic is one of the criteria we use to evaluate bio charities at Founders Pledge (see the "Prioritize Pathogen- and Threat-Agnostic Approaches" section starting on p. 87 of my Founders Pledge bio report). 

One reason I didn't see listed among your premises is the general point about hedging against uncertainty: we're very uncertain about what a future pandemic might look like and where it will come from, and the threat landscape only becomes more complex with technological adva... (read more)

1
JasperGo
1mo
I'm personally not that worried about substitution risks. Roughly: The deterrence aspect is strongest for low-resource threat actors and—from a tail-risk perspective—bio is probably the most dangerous thing they can utilize, with that pesky self-replication and whatnot. 
2
Max Görlitz
1mo
Very useful comment, thanks! I fully agree with this; I think this was an implicit premise of mine that I failed to point out explicitly. Great point that I actually haven't considered so far. I would need to think about this more before giving my opinion. It seems really context-dependent, though, and hard to determine with any confidence. Also, the Maginot line analogy is cool; I hadn't seen that before. (I guess I really should read more of your report 🙂)

Hi Ulrik, thanks for this comment! Very much agreed on the communications failures around aerosolized transmission. I wonder how much the mechanics of transmission would enter into a policy discussion around GUV (rather than a simplified “These lights can help suppress outbreaks.”)

An interesting quote relevant to bio attention hazards, from an old CNAS report on Aum Shinrikyo:

"This unbroken string of failures with botulinum and anthrax eventually convinced the group that making biological weapons was more difficult than Endo [Seiichi Endo, who ran the BW program] was acknowledging. Asahara [Shoko Asahara, the founder/leader of the group] speculated that American comments on the risk of biological weapons were intended to delude would-be terrorists into pursuing this path."

Footnote source in the report: "Interview with Fumihiro ... (read more)

4
gwern
4mo
Good report overall on tacit knowledge & biowarfare. This is relevant to the discussion over LLM risks: the Aum Shinrikyo chemist could make a lot of progress by reading papers and figuring out his problems as he went, but the bacteriologist couldn't figure out his issues with what seems to have been a viable plan to weaponize & mass-produce anthrax, where lack of feedback led it to fail. Which does sound like something that a superhumanly-knowledgeable (but not necessarily that intelligent) LLM could help a lot with simply by pattern-matching and making lists of suggestions for things that are, to the human, 'unknown unknowns'.

Thanks for this post! I'm not sure cyber is a strong example here. Given how little is known publicly about the extent and character of offensive cyber operations, I don't feel that I'm able to assess the balance of offense and defense very well.

Longview’s nuclear weapons fund and Founders Pledge’s Global Catastrophic Risks Fund (disclaimer: I manage the GCR Fund). We recently published a long report on nuclear war and philanthropy that may be useful, too. Hope this helps!

1
Luke Eure
5mo
Thank you! Exactly what I was looking for.

Just saw reporting that one of the goals for the Biden-Xi meeting today is "Being able to pick up the phone and talk to one another if there’s a crisis. Being able to make sure our militaries still have contact with one another." 

I had a Forum post about this earlier this year (with my favorite title), "Call Me, Maybe? Hotlines and Global Catastrophic Risks," with a section on U.S.-China crisis comms, in case it's of interest:

"For example, after the establishment of an initial presidential-level communications link in 1997, Chinese leaders did not respond

... (read more)

There is currently just one track 2/track 1.5 diplomatic dialogue between the U.S. and China that focuses on strategic nuclear issues. My rough estimate is that starting one more would cost ~$250K/year.

China and India. Then generally excited about leveraging U.S. alliance dynamics and building global policy advocacy networks, especially for risks from technologies that seem to be becoming cheaper and more accessible, e.g. in synthetic biology.

I think in general, it's a trade-off along the lines of uncertainty and leverage -- GCR interventions pull bigger levers on bigger problems, but in high-uncertainty environments with little feedback. I think evaluations in GCR should probably be framed in terms of relative impact, whereas we can more easily evaluate GHD in terms of absolute impact.

This is not what you asked about, but I generally view GCR interventions as highly relevant to current-generation and near-term health and wellbeing. When we launched the Global Catastrophic Risks Fund last year,... (read more)

I think: read a lot, interview a lot of people who are smarter (or more informed, connected, etc.) than I am about the problem, snowball sample from there, and then write a lot.

I wonder if FP's research director, @Matt_Lerner, has a better answer for me, or for FP researchers in general.

Thanks for the question! In 3 years, this might include:

  • Overall, "right of boom" interventions make up a larger fraction of funding (perhaps 1/4), even as total funding grows by an order of magnitude
  • There are major public and private efforts to understand escalation management (conventional and nuclear), war limitation, and war termination in the three-party world.
  • Much more research and investment in "civil defense" and resilience interventions across the board, not just nuclear. So that might include food security, bunkers, transmission-blocking intervent
... (read more)
5
jackva
7mo
I think the idea of an energy descent is extremely far outside the expert consensus on the topic, as Robin discusses at length in his replies to that post. This is nothing we need to worry about.

A few that come to mind:

  • Risk-general/threat-agnostic/all-hazards risk-mitigation (see e.g. Global Shield and the GCRMA)
  • "Civil defense" interventions and resilience broadly defined
  • Intrawar escalation management
  • Protracted great power war

Definitely difficult. I think my colleagues' work at Founders Pledge (e.g. How to Evaluate Relative Impact in High-Uncertainty Contexts) and iterating on "impact multipliers" to make ever-more-rigorous comparative judgments is the most promising path forward. I'm not sure that this is a problem unique to GCRs or climate. A more high-leverage risk-tolerant approach to global health and development faces the same issues, right?

Maybe, but I'm not really qualified to say much about this. I do think we need to think beyond New START (which was going to need a follow-on agreement anyway), and beyond arms control as "formal, legally binding, ratified treaties." I think some nonprofits and funders have been playing a very reactive kind of whack-a-mole game when it comes to nuclear security, reacting to the latest news about new weapon systems, doctrinal changes, and current events. Instead, are there ways to think bigger about arms control, to make some of these ideas more politically... (read more)

I would say cash-constrained. There are plenty of good opportunities out there, and a field of smart scholars, advocates, and practitioners with transferable skills. Just need a lot more money.

Thanks for the question, Johannes! My best elevator pitch is roughly an ITN case that starts with neglectedness:

The biggest funder in nuclear security just withdrew from the field, leaving only ~$32 million/year in philanthropic funding. That's a third of the budget of Oppenheimer, and several orders of magnitude smaller than philanthropic spending on climate change. This is a huge blow to a field that's already small and aging, and would leave defense contractors and bureaucrats to determine nuclear policy. But it's also an opportunity to reshape the fiel... (read more)

1
Angelina Li
7mo
Huh! Is this a specific live funding opportunity you are tracking, or just an example of a specific outcome that philanthropic nuclear funding has generated before? Curious if you can elaborate, if not too sensitive!

(It's a long elevator ride)

Hi Angelina! Thanks for the great question! There are several government actors, like DTRA, the Minerva Research Initiative, and the STRATCOM Academic Alliance, that play an important role in the non-governmental nuclear security space, including with funding. Then there are the National Labs as well as FFRDCs and UARCs that receive government funding and often work on relevant issues. Then there are defense contractors that will provide funding to think tanks and organizations that just so happen to support the latest weapons... (read more)

This does help answer the question, but it conflates extinction risk with existential risk, which I think is a big mistake in general. This chapter in How Worlds Collapse does a nice job of explaining this:

"Currently, existential risk scholars tend to focus on events and processes for which a fairly direct, simple story can be told about how they could lead to extinction. [...] However, if there is a substantial probability that collapse could destroy humanity's longterm potential [including by recovery with bad values], this should change one's view of ca

... (read more)
2
Nathan Young
8mo
Okay but I just think that's not that common a view. If you leave 1,000 - 10,000 humans alive, the longterm future is probably fine. So that's the existential risk reduction down by 60 - 90%

For anyone who is interested, Founders Pledge has a longer report on this (with a discussion of funding constraints as well as funding ideas that could absorb a lot of money), as well as some related work on specific funding opportunities like crisis communications hotlines.

Arturo, thank you for this comment and the very kind words! 

I really like your point about beneficially "dual-use" interventions, and that we might want to look for right-of-boom interventions with near-term positive externalities. I think that's useful for market-shaping and for political tractability (no one likes to invest in something that their successor will take credit for) -- and it's just a good thing to do!

It feels similar to the point that bio-risk preparedness has many current-gen benefits, like Kevin Esvelt's point here that "Crucially, a... (read more)

Johannes, as he often does, said it better than I could! 

Thanks, Vasco. I totally forgot to reply to your comment on my previous post -- my apologies!

I think you raise a good general point that we'd expect societal spending after a catastrophe to be high, especially given the funder behavior we see for newsworthy humanitarian disasters. 

There are a few related considerations here, all of them touching on the issue you also raise: "Coming up with good interventions in little time may be harder."

  1. Fast-Moving Catastrophes -- I would expect many nuclear wars to escalate very quickly, far outpacing the timelines
... (read more)
2
Vasco Grilo
9mo
Thanks for elaborating! I can see that right-of-boom spending before the nuclear war is most likely more effective than after it. To clarify, by "all of this" I meant not just considerations about whether it is better to spend before or after the nuclear war, but also about the expected spending on left- and right-of-boom interventions. I am thinking along these lines:

  • Left-of-boom spending is currently at 30 M$/year.
  • The expected right-of-boom spending is 1 G$/year, for a probability of 0.1 %/year of a global nuclear war leading to 1 T$ being invested in right-of-boom spending.
  • Right-of-boom spending before nuclear war is 30 times as effective as after the nuclear war, for the reasons you mentioned.
  • So the expected right-of-boom spending (adjusted for effectiveness) is equivalent to 30 M$/year (= 1000/30) of right-of-boom spending before the nuclear war.
  • Therefore it is not obvious to me that right-of-boom spending before nuclear war is way more neglected than left-of-boom spending (I got 30 M$/year for both above), even if right-of-boom spending before the nuclear war is most likely more effective than after it.

Basically, what I am saying is that, even if right-of-boom spending after the nuclear war is much less effective, it would be so large that the expected right-of-boom spending adjusted for effectiveness could still be comparable with current left-of-boom spending. Does this make sense? Note I am not claiming that left-of-boom spending is more/less effective than right-of-boom spending before nuclear war. I am just suggesting that left- and right-of-boom spending may not have super different levels of neglectedness.
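A minimal restatement of the arithmetic in the comment above, using only the figures the commenter assumes (the 0.1 %/year probability, the 1 T$ post-war spending, and the 30x effectiveness factor are that comment's assumptions, not established estimates):

$$\mathbb{E}[\text{right-of-boom spending}] = 0.1\,\%/\text{year} \times 1\,\text{T\$} = 1\,\text{G\$/year}$$

$$\text{effectiveness-adjusted equivalent} = \frac{1\,\text{G\$/year}}{30} \approx 33\,\text{M\$/year} \approx \text{current left-of-boom spending of } 30\,\text{M\$/year}$$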

Just a note that the Likert scale in the poll is not symmetrical ("Agree" vs. "Strongly Disagree").

Agree with Johannes here on the bias in much of the nuclear winter work (and I say that as someone who thinks catastrophic risk from nuclear war is under-appreciated). The political motivations are fairly well-known and easy to spot in the papers.

2
Vasco Grilo
1y
Thanks for the feedback, Christian! I find it interesting that, despite concerns about the extent to which Robock's group is truth-seeking, Open Philanthropy granted it 2.98 M$ in 2017 and 3.00 M$ in 2020. This does not mean Robock's estimates are unbiased, but, if they were systematically off by multiple orders of magnitude, Open Philanthropy would presumably not have made those grants. I do not have a view on this, because I have not looked into the nuclear winter literature (besides very quickly skimming some articles).

Hi Quinn! Thanks for this comment. Yes, I expect any theory of change for private actors here will run through policy advocacy. This both provides massive leverage (by using government funds) and is just necessary given the subject matter.

I wouldn't say it stops at a white paper -- one could organize track II dialogues to discuss the systems, lobby government, give policy briefings at a think tank, hold side events at international security conferences and treaty review conferences, etc.  

This could also take the form of advisory roles (I'... (read more)

Thanks, David! I really appreciate this comment. One reason I find this left/right framework more intuitive than "prevention, response, and resilience" is that there are right-of-boom interventions that I would classify as "prevention." For example, I think of escalation management after limited first use as "preventing" the largest nuclear wars (especially if we think such a war poses qualitatively different problems). 

Your cost-effectiveness models are very helpful, and I plan to cite them in the bigger project :) 

Thanks for the kind comment, Stephen! You're right I phrased that wrong -- it is about tractability, not probability. I agree with you that the tractability of escalation control is probably the biggest issue here, but I also think we should expect low-hanging fruit given the relative neglectedness. There are a couple of concrete projects that I am/would be excited about:

  • Escalation management with North Korea -- How can we keep nuclear war limited with North Korea? As I understand it, the problem of deterrence after DPRK first use has
... (read more)

Hi ParetoPrinciple! I appreciate your engaging with the document :)

I quote Schelling throughout (and think he actually makes some of the points you hint at more clearly in The Strategy of Conflict — eg the quote in footnote 61 here). You’re definitely right that no hotlines discussion would be complete without this!

1
ParetoPrinciple
1y

Thanks, Will! Really appreciate this comment :)

Credit for the title actually goes to my colleagues on the FP research team (I believe it was Tom or Johannes who first came up with it). 

Hi Rani, it’s great to see the report out. It’s good to have this clear deep dive on the canonical case. I especially like that it points to some attributes of track II dialogues that we should pay special attention to when evaluating them as potential interventions. Great work!

Thanks for writing this! I think it's great. Reminds me of another wild animal metaphor about high-stakes decision-making under uncertainty -- Reagan's 1984 "Bear in the Woods" campaign ad:

There is a bear in the woods. For some people, the bear is easy to see. Others don't see it at all. Some people say the bear is tame. Others say it's vicious and dangerous. Since no one can really be sure who's right, isn't it smart to be as strong as the bear -- if there is a bear?

I think that kind of reasoning is helpful when communicating about GCRs and X-risks.

Really enjoyed reading this and learned a lot. Thank you for writing it! I’m especially intrigued by the proposal for regional alliances in table 6 — including the added bit about expansionist regional powers in the co-benefits column of the linked supplemental version of the table.

I was curious about one part of the paper on volcanic eruptions. You wrote that eg “Indonesia harbours many of the world’s large volcanoes from which an ASRS could originate (eg, Toba and Tambora eruptions).” Just eyeballing maps of the biggest known volcanoes, the overlap with... (read more)

2
Matt Boyd
2y
Hi Christian, thanks for your thoughts. You're right to note that islands like Iceland, Indonesia, NZ, etc are also where there's a lot of volcanic activity. Mike Cassidy and Lara Mani briefly summarize potential ash damage in their post on supervolcanoes here (see the table on effects). Basically there could be severe impacts on agriculture and infrastructure. I think the main lesson is that at least two prepared islands would be good. In different hemispheres. That first line of redundancy is probably the most important (also in case one is a target in nuclear war, eg NZ is probably susceptible to an EMP directed at Australia). 

Thank you! I also really struggle with the clock metaphor. It seems to have just gotten locked in as the Bulletin took off in the early Cold War. The time bomb is a great suggestion — it communicates the idea much better.

Thanks for engaging so closely with the report! I really appreciate this comment.

Agreed on the weapon speed vs. decision speed distinction — the physical limits to the speed of war are real. I do think, however, that flash wars can make non-flash wars more likely (eg cyber flash war unintentionally intrudes on NC3 system components, that gets misinterpreted as preparation for a first strike, etc.). I should have probably spelled that out more clearly in the report.

I think we actually agree on the broader point — it is possible to leverage autonomous system... (read more)

Hi Kevin,

Thank you for your comment and thanks for reading :)

The key question for us is not “what is autonomy?” — that’s bogged down the UN debates for years — but rather “what are the systemic risks of certain military AI applications, including a spectrum of autonomous capabilities?” I think many systems around today are better thought of as closer to “automated” than truly “autonomous,” as I mention in the report, but again, I think that binary distinctions like that are less salient than many people think. What we care about is the multi-dimensional pr... (read more)

Hi Haydn,

That’s a great point. I think you’re right — I should have dug a bit deeper on how the private sector fits into this.

I think cyber is an example where the private sector has really helped to lead — like Microsoft’s involvement at the UN debates, the Paris Call, the Cybersecurity Tech Accord, and others — and maybe that’s an example of how industry stakeholders can be engaged.

I also think that TEVV-related norms and confidence building measures would probably involve leading companies.

I still broadly think that states are the lever to target at ... (read more)

-1
HaydnBelfield
2y
I don't think it's a hole at all, I think it's quite reasonable to focus on major states. The private sector approach is a different one with a whole different set of actors/interventions/literature - completely makes sense that it's outside the scope of this report. I was just doing classic whatabouterism, wondering about your take on a related but separate approach. Btw I completely agree with you about cluster munitions.

Thank you for the reply! I definitely didn’t mean to mischaracterize your opinions on that case :)

Agreed, a project like that would be great. Another point in favor of your argument that this is a dynamic to watch out for in AI competition: verifying claims of superiority may be harder for software (along the lines of Missy Cummings’s “The AI That Wasn’t There” https://tnsr.org/roundtable/policy-roundtable-artificial-intelligence-and-international-security/#essay2). That seems especially vulnerable to misperceptions.

Hi Haydn,

This is awesome! Thank you for writing and posting it. I especially liked the description of the atmosphere at RAND, and big +1 on the secrecy heuristic being a possibly big problem.[1] Some people think it helps explain intelligence analysts' underperformance in the forecasting tournaments, and I think there might be something to that explanation. 

We have a report on autonomous weapons systems and military AI applications coming out soon (hopefully later today) that gets into the issue of capability (mis)perception in arms races too, an... (read more)

Thanks for the kind words Christian - I'm looking forward to reading that report, it sounds fascinating.

I agree with your first point - I say "They were arguably right, ex ante, to advocate for and participate in a project to deter the Nazi use of nuclear weapons." Actions in 1939-42 or around 1957-1959 are defensible. However, I think this highlights that 1) accurate information in 1942-3 (and 1957) would have been useful and 2) when they found out the accurate information (in 1944 and 1961), it's very interesting that it didn't stop the arms buildup.

The quest... (read more)

Hi Fin!

This is great. Thank you for writing it up and posting it! I gave it a strong upvote.

(TLDR for what follows: I think this is very neglected, but I’m highly uncertain about tractability of formal treaty-based regulation)

As you know, I did some space policy-related work at a think tank about a year ago, and one of the things that surprised us most is how neglected the issue is — there are only a handful of organizations seriously working on it, and very few of them are the kinds of well-connected and -respected think tanks that actually influence poli... (read more)

Thanks Gavin! That makes sense on how you view this and (3).

Thank you for writing this overview! I think it's very useful. A few notes on the famous "30%" claim:

  • Part of the problem with fully understanding the performance of IC analysts is that much of the information about the tournaments and the ICPM is classified.
  • What originally happened is that someone leaked info about ACE to David Ignatius, who then published it in his column. (The IC never denied the claim.[1]) The document you cite is part of a case study by MITRE that's been approved for public release.

One under-appreciated takeaway that you hint at i... (read more)

7
Gavin
2y
This is extremely helpful and a deep cut - thanks Christian. I've linked to it in the post. Yeah, our read of Goldstein isn't much evidence against (3), we're just resetting the table, since previously people used it as strong evidence for (3).

Experimental Wargames for Great Power War and Biological Warfare

Biorisk and Recovery from Catastrophe, Epistemic Institutions

This is a proposal to fund a series of "experimental wargames" on great power war and biological warfare. Wargames have been a standard tool of think tanks, the military, and the academic IR world since the early Cold War. Until recently, however, these games were largely used to uncover unknown unknowns and help with scenario planning. Most such games continue to be unscientific exercises. Recent work on "experimental wa... (read more)

Creative Arms Control

Biorisk and Recovery from Catastrophe

This is a proposal to fund research efforts on "creative arms control," or non-treaty-based international governance mechanisms. Traditional arms control -- formal treaty-based international agreements -- has fallen out of favor among some states, to the extent that some prominent policymakers have asked whether we've reached "The End of Arms Control."[1] Treaties are difficult to negotiate and may be poorly suited to some fast-moving issues like autonomous weapons, synthetic biology, and cyber... (read more)

A Project Candor for Global Catastrophic Risks

Biorisk and Recovery from Catastrophe, Values and Reflective Processes, Effective Altruism

This is a proposal to fund a large-scale public communications project on global catastrophic risks (GCRs), modeled on the Eisenhower administration's Project Candor. Project Candor was a Cold War public relations campaign to "inform the public of the realities of the 'Age of Peril'" (see Unclassified 1953 Memo from Eisenhower Library). Policymakers were concerned that the public did not yet understand that the threa... (read more)
