christian.r

Senior Researcher @ Founders Pledge
377 · Joined Mar 2022

Bio

Christian Ruhl, Founders Pledge

I am a Senior Researcher at Founders Pledge, where I work on global catastrophic risks. Previously, I was the program manager for Perry World House's research program on The Future of the Global Order: Power, Technology, and Governance. I'm interested in the national security implications of AI, cyber norms, nuclear risks, space policy, probabilistic forecasting and its applications, histories of science, and global governance. Please feel free to reach out to me with questions or just to connect!

Comments (16)

Topic Contributions (1)

Hi Rani, it’s great to see the report out. It’s good to have this clear deep dive on the canonical case. I especially like that it points to some attributes of track II dialogues that we should pay special attention to when evaluating them as potential interventions. Great work!

Thanks for writing this! I think it's great. Reminds me of another wild animal metaphor about high-stakes decision-making under uncertainty -- Reagan's 1984 "Bear in the Woods" campaign ad:

There is a bear in the woods. For some people, the bear is easy to see. Others don't see it at all. Some people say the bear is tame. Others say it's vicious and dangerous. Since no one can really be sure who's right, isn't it smart to be as strong as the bear -- if there is a bear?

I think that kind of reasoning is helpful when communicating about GCRs and X-risks.

Really enjoyed reading this and learned a lot. Thank you for writing it! I’m especially intrigued by the proposal for regional alliances in table 6 — including the added bit about expansionist regional powers in the co-benefits column of the linked supplemental version of the table.

I was curious about one part of the paper on volcanic eruptions. You wrote that eg “Indonesia harbours many of the world’s large volcanoes from which an ASRS could originate (eg, Toba and Tambora eruptions).” Just eyeballing maps of the biggest known volcanoes, the overlap with some island refuges seems concerning. Do we know what ash deposit models say about how much these places would get covered in ash for various kinds of volcanic eruptions in the region and what this would mean for infrastructure and agriculture?

Thanks!

Thank you! I also really struggle with the clock metaphor. It seems to have just gotten locked in as the Bulletin took off in the early Cold War. The time bomb is a great suggestion — it communicates the idea much better.

Thanks for engaging so closely with the report! I really appreciate this comment.

Agreed on the weapon speed vs. decision speed distinction — the physical limits to the speed of war are real. I do think, however, that flash wars can make non-flash wars more likely (eg, a cyber flash war unintentionally intrudes on NC3 system components, which gets misinterpreted as preparation for a first strike, etc.). I should have probably spelled that out more clearly in the report.

I think we actually agree on the broader point — it is possible to leverage autonomous systems and AI to make the world safer, to lengthen decision-making windows, to make early warning and decision-support systems more reliable.

But I don’t think that’s a given. It depends on good choices. The key questions for us are therefore: How do we shape the future adoption of these systems to make sure that’s the world we’re in? How can we trust that our adversaries are doing the same thing? How can we make sure that our confidence in some of these systems is well-calibrated to their capabilities? That’s partly why a ban probably isn’t the right framing.

I also think this exchange illustrates why we need more research on the strategic stability questions.

Thanks again for the comment!

Hi Kevin,

Thank you for your comment and thanks for reading :)

The key question for us is not “what is autonomy?” — that’s bogged down the UN debates for years — but rather “what are the systemic risks of certain military AI applications, including a spectrum of autonomous capabilities?” I think many systems around today are better thought of as closer to “automated” than truly “autonomous,” as I mention in the report, but again, I think that binary distinctions like that are less salient than many people think. What we care about is the multi-dimensional problem of more and more autonomy in more and more systems, and how that can destabilize the international system.

I agree with your point that it’s a tricky definitional problem. In point 3 under the “Killer Robot Ban” section of the report, one of the key issues is “The line between autonomous and automated systems is blurry.” I think you’re pointing to a key problem with how people often think about this issue.

I’m sorry I won’t be able to give a satisfying answer about “ethical norms” as it’s a bit outside the purview of the report, which focuses more on strategic stability and GCRs. (I will say that I think the idea of “human in the loop” is not the solution it’s often made out to be, given some of the issues with speed and cognitive biases discussed in the report). There are some people doing good work on related questions in international humanitarian law though that will give a much more interesting answer.

Thanks again!

Hi Haydn,

That’s a great point. I think you’re right — I should have dug a bit deeper on how the private sector fits into this.

I think cyber is an example where the private sector has really helped to lead — like Microsoft’s involvement at the UN debates, the Paris Call, the Cybersecurity Tech Accord, and others — and maybe that’s an example of how industry stakeholders can be engaged.

I also think that TEVV-related norms and confidence building measures would probably involve leading companies.

I still broadly think that states are the lever to target at this stage in the problem, given that they would be (or are) driving demand. I am also always a little unsure about using cluster munitions as an example of success — both because I think autonomous weapons are just a different beast in terms of military utility, and of course because of the breaches (including recently).

Thank you again for pointing out that hole in the report!

Thank you for the reply! I definitely didn’t mean to mischaracterize your opinions on that case :)

Agreed, a project like that would be great. Another point in favor of your argument that this is a dynamic to watch out for in AI competition: verifying claims of superiority may be harder for software (along the lines of Missy Cummings’s “The AI That Wasn’t There” https://tnsr.org/roundtable/policy-roundtable-artificial-intelligence-and-international-security/#essay2). That seems especially vulnerable to misperceptions.

Hi Haydn,

This is awesome! Thank you for writing and posting it. I especially liked the description of the atmosphere at RAND, and big +1 on the secrecy heuristic being a possibly big problem.[1] Some people think it helps explain intelligence analysts' underperformance in the forecasting tournaments, and I think there might be something to that explanation. 

We have a report on autonomous weapons systems and military AI applications coming out soon (hopefully later today) that gets into the issue of capability (mis)perception in arms races too, and your points on competition with China are well taken.

What I felt was missing from the post was the counterfactual: what if the atomic scientists’ and defense intellectuals’ worst fears about their adversaries had been correct? It’s not hard to imagine. The USSR did seem poised to dominate in rocket capabilities at the time of Sputnik.

I think there’s some hindsight bias going on here. In the face of high uncertainty about an adversary’s intentions and capabilities, it’s not obvious to me that skepticism is the right response. Rather, we should weigh possible outcomes. In the Manhattan Project case, one of those possible outcomes was that a murderous totalitarian regime would be the first to develop nuclear weapons, become a permanent regional hegemon, or worse, a global superpower. I think the atomic scientists’ and U.S. leadership’s decision then was the right one, given their uncertainties at the time.

I think it would be especially interesting to see whether misperception is actually more common historically. But I think there are examples of “racing” where assessments were accurate or even under-confident (as you mention, thermonuclear weapons).

Thanks again for writing this! I think you raise a really important question — when is AI competition “suboptimal”?[2]
