In this episode of the Making Sense podcast with Sam Harris, Barton Gellman of the Brennan Center for Justice discusses how he "organized five nonpartisan tabletop exercises premised on an authoritarian candidate winning the presidency to test the resilience of democratic institutions".

"The 175 participants across five exercises were Republicans, Democrats, and independents; liberals, conservatives, and centrists. They included veterans of the first Trump administration and previous administrations of both parties. "

This seems like an extremely valuable exercise when trying to prepare for long-tail risks.

--------------


I often think about this post. It asks the seriously neglected question: Why was the AI Alignment community so unprepared for this moment?

I think we're going to get competent Digital Agents soon (< 2 years). When they arrive, unless we prepare urgently now, we will once again find ourselves extremely unprepared.

I'd like to see either a new AI Safety organisation created to run these exercises with key decision makers (e.g. Government, Industry, maybe Academia), or have an existing org (CAIS?) take on the responsibility.

Every morning we should be repeating the mantra: there are no parents in the room. It is just us.

--------------

More here on the program:

"In May and June 2024, the Brennan Center organized five nonpartisan tabletop exercises premised on an authoritarian candidate winning the presidency to test the resilience of democratic institutions. The antidemocratic executive actions explored in the scenarios were based on former President Donald Trump’s public statements about his plans for a potential second term in office.  

We do not predict whether Trump will win the November election, and we take no position on how Americans should cast their votes. What we have done is simulated how authoritarian elements of Trump’s agenda, if he is elected, might play out against lawful efforts to check abuses of power.

The 175 participants across five exercises were Republicans, Democrats, and independents; liberals, conservatives, and centrists. They included veterans of the first Trump administration and previous administrations of both parties.  

Among them were former governors, former cabinet members, former state attorneys general, former members of the House and Senate, retired flag and general officers, labor leaders, faith leaders, grassroots activists, members of the Brennan Center staff, and C-suite business executives. In the exercises, they represented cabinet secretaries, executive agency chiefs, law enforcement officers, the military chain of command, Congress, the judiciary, state and local governments, news media, and elements of civil society. "

--------------

Comments

I'm aware of at least two efforts to run tabletop exercises on AI takeoff with decision makers, so I don't think this is particularly neglected, but I do think it's valuable.

Good to know:

  1. Can you share more about these efforts?

  2. What makes you think it isn't neglected? That is, why does the existence of two efforts mean it isn't neglected? Part of me wonders whether many national governments should consider running such exercises (though I'd hesitate to take them to the military, only to have them become excited by the capabilities).

I don't know if this is what Caleb had in mind, but Intelligence Rising is in this genre, I think.

Building on the above: the folks behind Intelligence Rising actually published a paper earlier this month, titled ‘Strategic Insights from Simulation Gaming of AI Race Dynamics’. I’ve not read it myself, but it might address some of your wonderings, @yanni. Here’s the abstract:

We present insights from ‘Intelligence Rising’, a scenario exploration exercise about possible AI futures. Drawing on the experiences of facilitators who have overseen 43 games over a four-year period, we illuminate recurring patterns, strategies, and decision-making processes observed during gameplay. Our analysis reveals key strategic considerations about AI development trajectories in this simulated environment, including: the destabilising effects of AI races, the crucial role of international cooperation in mitigating catastrophic risks, the challenges of aligning corporate and national interests, and the potential for rapid, transformative change in AI capabilities. We highlight places where we believe the game has been effective in exposing participants to the complexities and uncertainties inherent in AI governance. Key recurring gameplay themes include the emergence of international agreements, challenges to the robustness of such agreements, the critical role of cybersecurity in AI development, and the potential for unexpected crises to dramatically alter AI trajectories. By documenting these insights, we aim to provide valuable foresight for policymakers, industry leaders, and researchers navigating the complex landscape of AI development and governance.

[emphasis added]
