Senior Researcher @ Founders Pledge
846 karma · Joined Mar 2022



Christian Ruhl, Founders Pledge

I am a Senior Researcher at Founders Pledge, where I work on global catastrophic risks. Previously, I was the program manager for Perry World House's research program on The Future of the Global Order: Power, Technology, and Governance. I'm interested in biosecurity, nuclear weapons, the international security implications of AI, probabilistic forecasting and its applications, history and philosophy of science, and global governance. Please feel free to reach out to me with questions or just to connect!


Topic Contributions

Longview’s nuclear weapons fund and Founders Pledge’s Global Catastrophic Risks Fund (disclaimer: I manage the GCR Fund). We recently published a long report on nuclear war and philanthropy that may be useful, too. Hope this helps!

Just saw reporting that one of the goals for the Biden-Xi meeting today is "Being able to pick up the phone and talk to one another if there’s a crisis. Being able to make sure our militaries still have contact with one another." 

I had a Forum post about this earlier this year (with my favorite title), "Call Me, Maybe? Hotlines and Global Catastrophic Risks," with a section on U.S.-China crisis comms, in case it's of interest:

"For example, after the establishment of an initial presidential-level communications link in 1997, Chinese leaders did not respond to repeated U.S. contact attempts during the 2001 Hainan Island incident. In this incident, Chinese fighter jets got too close to a U.S. spy plane conducting routine operations, and the U.S. plane had to make an emergency landing on Hainan Island. The U.S. plane contained highly classified technology, and the crew destroyed as much of it as they could (allegedly in part by pouring coffee on the equipment) before being captured and interrogated. Throughout the incident, the U.S. attempted to reach Chinese leadership via the hotline, but was unsuccessful, leading U.S. Deputy Secretary of State Richard Armitage to remark that “it seems to be the case that when very, very difficult issues arise, it is sometimes hard to get the Chinese to answer the phone.”"

There is currently just one track 2/track 1.5 diplomatic dialogue between the U.S. and China that focuses on strategic nuclear issues. Roughly $250K/year is my estimate of what it would cost to start one more.

China and India. I'm then generally excited about leveraging U.S. alliance dynamics and building global policy advocacy networks, especially for risks from technologies that seem to be becoming cheaper and more accessible, e.g. in synthetic biology.

I think in general, it's a trade-off along the lines of uncertainty and leverage -- GCR interventions pull bigger levers on bigger problems, but in high-uncertainty environments with little feedback. I think evaluations in GCR should probably be framed in terms of relative impact, whereas we can more easily evaluate GHD in terms of absolute impact.

This is not what you asked about, but I generally view GCR interventions as highly relevant to current-generation and near-term health and wellbeing. When we launched the Global Catastrophic Risks Fund last year, we wrote in the prospectus:

The Fund’s grantmaking will take a balanced approach to existential and catastrophic risks. Those who take a longtermist perspective in principle put special weight on existential risks—those that threaten to extinguish or permanently curtail humanity’s potential—even where interventions appear less tractable. Not everyone shares this view, however, and people who care mostly about current generations of humanity may prioritize highly tractable interventions on global catastrophic risks that are not directly “existential”. In practice, however, the two approaches often converge, both on problems and on solutions. A common-sense approach based on simple cost-benefit analysis points us in this direction even in the near-term.

I like that the GCR framing is becoming more popular, e.g. with Open Philanthropy renaming their grant portfolio:

We recently renamed our “Longtermism” grant portfolio to “Global Catastrophic Risks”. We think the new name better reflects our view that AI risk and biorisk aren’t only “longtermist” issues; we think that both could threaten the lives of many people in the near future.

I think: read a lot, interview a lot of people who are smarter (or more informed, connected, etc.) than I am about the problem, snowball sample from there, and then write a lot.

I wonder if FP's research director, @Matt_Lerner, has a better answer for me, or for FP researchers in general.

Thanks for the question! In 3 years, this might include:

  • Overall, "right of boom" interventions make up a larger fraction of funding (perhaps 1/4), even as total funding grows by an order of magnitude.
  • There are major public and private efforts to understand escalation management (conventional and nuclear), war limitation, and war termination in the three-party world.
  • There is much more research and investment in "civil defense" and resilience interventions across the board, not just nuclear. That might include food security, bunkers, transmission-blocking interventions, better P4E, better national stockpiles and distribution systems, resilient crisis-communication systems, etc.
  • There are multiple ongoing track 2 and track 1.5 talks, and eventually official dialogues between the U.S., Russia, and China, to better understand each other's views on limited war and to find common ground on risk-reduction measures and arms control beyond formal treaty-based tools.

A few that come to mind:

  • Risk-general/threat-agnostic/all-hazards risk mitigation (see e.g. Global Shield and the GCRMA)
  • "Civil defense" interventions and resilience broadly defined
  • Intrawar escalation management
  • Protracted great power war

Definitely difficult. I think the most promising path forward is my colleagues' work at Founders Pledge (e.g. How to Evaluate Relative Impact in High-Uncertainty Contexts) and iterating on "impact multipliers" to make ever-more-rigorous comparative judgments. I'm not sure this is a problem unique to GCRs or climate. A more high-leverage, risk-tolerant approach to global health and development faces the same issues, right?
