All of christian.r's Comments + Replies

Islands, nuclear winter, and trade disruption as a human existential risk factor

Really enjoyed reading this and learned a lot. Thank you for writing it! I’m especially intrigued by the proposal for regional alliances in table 6 — including the added bit about expansionist regional powers in the co-benefits column of the linked supplemental version of the table.

I was curious about one part of the paper on volcanic eruptions. You wrote, for example, that “Indonesia harbours many of the world’s large volcanoes from which an ASRS could originate (eg, Toba and Tambora eruptions).” Just eyeballing maps of the biggest known volcanoes, the overlap with... (read more)

2 · Matt Boyd · 2d
Hi Christian, thanks for your thoughts. You're right to note that islands like Iceland, Indonesia, NZ, etc. are also where there's a lot of volcanic activity. Mike Cassidy and Lara Mani briefly summarize potential ash damage in their post on supervolcanoes here [https://forum.effectivealtruism.org/posts/jJDuEhLpF7tEThAHy/on-the-assessment-of-volcanic-eruptions-as-global] (see the table on effects). Basically, there could be severe impacts on agriculture and infrastructure. I think the main lesson is that at least two prepared islands, in different hemispheres, would be good. That first line of redundancy is probably the most important (also in case one is a target in nuclear war, eg NZ is probably susceptible to an EMP directed at Australia).
Building a Better Doomsday Clock

Thank you! I also really struggle with the clock metaphor. It seems to have just gotten locked in as the Bulletin took off in the early Cold War. The time bomb is a great suggestion — it communicates the idea much better.

Risks from Autonomous Weapon Systems and Military AI

Thanks for engaging so closely with the report! I really appreciate this comment.

Agreed on the weapon speed vs. decision speed distinction — the physical limits to the speed of war are real. I do think, however, that flash wars can make non-flash wars more likely (eg a cyber flash war unintentionally intrudes on NC3 system components, which gets misinterpreted as preparation for a first strike, etc.). I probably should have spelled that out more clearly in the report.

I think we actually agree on the broader point — it is possible to leverage autonomous system... (read more)

Risks from Autonomous Weapon Systems and Military AI

Hi Kevin,

Thank you for your comment and thanks for reading :)

The key question for us is not “what is autonomy?” — a question that has bogged down the UN debates for years — but rather “what are the systemic risks of certain military AI applications, including a spectrum of autonomous capabilities?” I think many systems around today are better thought of as closer to “automated” than truly “autonomous,” as I mention in the report, but again, I think binary distinctions like that are less salient than many people assume. What we care about is the multi-dimensional pr... (read more)

Risks from Autonomous Weapon Systems and Military AI

Hi Haydn,

That’s a great point. I think you’re right — I should have dug a bit deeper on how the private sector fits into this.

I think cyber is an area where the private sector has really helped to lead — Microsoft’s involvement in the UN debates, the Paris Call, the Cybersecurity Tech Accord, and others — and maybe that’s a model for how industry stakeholders can be engaged.

I also think that norms and confidence-building measures related to TEVV (test and evaluation, verification and validation) would probably involve leading companies.

I still broadly think that states are the lever to target at ... (read more)

3 · HaydnBelfield · 3mo
I don't think it's a hole at all; I think it's quite reasonable to focus on major states. The private sector approach is a different one with a whole different set of actors/interventions/literature - it completely makes sense that it's outside the scope of this report. I was just doing classic whatabouterism, wondering about your take on a related but separate approach. Btw I completely agree with you about cluster munitions.
Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Thank you for the reply! I definitely didn’t mean to mischaracterize your opinions on that case :)

Agreed, a project like that would be great. Another point in favor of your argument that this is a dynamic to watch out for in AI competition: verifying claims of superiority may be harder for software (along the lines of Missy Cummings’s “The AI That Wasn’t There” https://tnsr.org/roundtable/policy-roundtable-artificial-intelligence-and-international-security/#essay2). That seems especially vulnerable to misperceptions.

Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg

Hi Haydn,

This is awesome! Thank you for writing and posting it. I especially liked the description of the atmosphere at RAND, and big +1 on the secrecy heuristic being a possibly big problem.[1] Some people think it helps explain intelligence analysts' underperformance in the forecasting tournaments, and I think there might be something to that explanation. 

We have a report on autonomous weapons systems and military AI applications coming out soon (hopefully later today) that gets into the issue of capability (mis)perception in arms races too, an... (read more)

Thanks for the kind words, Christian - I'm looking forward to reading that report; it sounds fascinating.

I agree with your first point - I say "They were arguably right, ex ante, to advocate for and participate in a project to deter the Nazi use of nuclear weapons." Actions in 1939-42 or around 1957-1959 are defensible. However, I think this highlights that 1) accurate information in 1942-43 (and 1957) would have been useful, and 2) when they found out the accurate information (in 1944 and 1961), it's very interesting that it didn't stop the arms buildup.

The quest... (read more)

Space governance - problem profile

Hi Fin!

This is great. Thank you for writing it up and posting it! I gave it a strong upvote.

(TLDR for what follows: I think this is very neglected, but I’m highly uncertain about the tractability of formal treaty-based regulation)

As you know, I did some space policy-related work at a think tank about a year ago, and one of the things that surprised us most was how neglected the issue is — there are only a handful of organizations seriously working on it, and very few of them are the kinds of well-connected and -respected think tanks that actually influence poli... (read more)

Comparing top forecasters and domain experts

Thanks, Gavin! That makes sense on how you view this and (3).

Comparing top forecasters and domain experts

Thank you for writing this overview! I think it's very useful. A few notes on the famous "30%" claim:

  • Part of the problem with fully understanding the performance of IC analysts is that much of the information about the tournaments and the ICPM is classified.
  • What originally happened is that someone leaked info about ACE to David Ignatius, who then published it in his column. (The IC never denied the claim.[1]) The document you cite is part of a case study by MITRE that's been approved for public release.

One under-appreciated takeaway that you hint at i... (read more)

7 · Gavin · 5mo
This is extremely helpful and a deep cut - thanks, Christian. I've linked to it in the post. Yeah, our read of Goldstein isn't much evidence against (3); we're just resetting the table, since previously people used it as strong evidence for (3).
The Future Fund’s Project Ideas Competition

Experimental Wargames for Great Power War and Biological Warfare

Biorisk and Recovery from Catastrophe, Epistemic Institutions

This is a proposal to fund a series of "experimental wargames" on great power war and biological warfare. Wargames have been a standard tool of think tanks, the military, and the academic IR world since the early Cold War. Until recently, however, these games were largely used to uncover unknown unknowns and help with scenario planning. Most such games continue to be unscientific exercises. Recent work on "experimental wa... (read more)

The Future Fund’s Project Ideas Competition

Creative Arms Control

Biorisk and Recovery from Catastrophe

This is a proposal to fund research efforts on "creative arms control," or non-treaty-based international governance mechanisms. Traditional arms control — formal treaty-based international agreements — has fallen out of favor among some states, to the extent that some prominent policymakers have asked whether we've reached "The End of Arms Control."[1] Treaties are difficult to negotiate and may be poorly suited to some fast-moving issues like autonomous weapons, synthetic biology, and cyber... (read more)

The Future Fund’s Project Ideas Competition

A Project Candor for Global Catastrophic Risks

Biorisk and Recovery from Catastrophe, Values and Reflective Processes, Effective Altruism

This is a proposal to fund a large-scale public communications project on global catastrophic risks (GCRs), modeled on the Eisenhower administration's Project Candor. Project Candor was a Cold War public relations campaign to "inform the public of the realities of the 'Age of Peril'" (see Unclassified 1953 Memo from Eisenhower Library). Policymakers were concerned that the public did not yet understand that the threa... (read more)