
tl;dr: A large number of citizen assemblies about AI, under the banner of a "Global coalition for inclusive AI", will soon be run. A short window is about to open for informing politicians and the public about what citizens think of AI in general. I'm suggesting that funders reevaluate the priority of communication, that AI Safety workers consider speaking in public, and that specialists in epistemics, disagreement and debate coordinate and join these efforts.

Epistemic status: This is written quickly and may be heavily modified in the coming days.

On Tuesday, February 11th, Missions Publiques, a French think tank, announced an international coalition for Citizen Assemblies on AI, in partnership with Yale and Stanford. Why is this important?

1-Substantially discussing AI Safety in the assembly is not guaranteed

If you look up the program and speakers of the pilot, which took place at ENS Paris, you find some mention of Safety, but only in a vague and allusive form.

However, on some occasions the organizers may turn out to be open to voluntary contributions to the program. If you are interested and have the right credentials, I'd encourage you to contact the initiative in your country.

It may be that participation in the Citizen Assembly as a deliberating member is voluntary (rather than purely randomized), or randomized but conditional on volunteering first. If that is the case, I highly suggest you compare your counterfactuals: there are surely some well-read people for whom the commitment would be valuable enough.

However, please note that I'm not sure what your final impact would consist of. My belief is that the outcome of citizen assemblies has the power to legitimate or downgrade the credibility and importance of AI Safety for key decision makers. I'm worried about the default, because I expect most assemblies outside the US/UK to strongly dilute the topic, contributing to making it invisible. I don't believe assemblies will lead to strong commitments (as has been seen in France with the Climate Assembly).

From personal experience, once in the assembly, talking about Safety and even X-risk is fine! The setup allows for genuine discussion and minimizes strong disagreements.

Factors that could increase your counterfactual impact in the case of voluntary participation include:

  • You live outside of the US/UK, or in a country where AI Safety is not yet mainstream or seriously considered, or is considered with skepticism or hostility. In this case, participating as someone who speaks about Safety prevents misrepresentation, or Safety simply being left out of the picture.
    • The exception is if you expect the outcome of the US/UK Citizen Assemblies to be more influential than the aggregation of all the other assemblies, since those countries are home to world-tier efforts in AI research. If you have dual citizenship and one of them is the US, this is an important variable (I'm skeptical of this argument, however).
  • [Unsure]: You’re professionally unaffiliated with existing AI Safety efforts, and this reinforces your legitimacy for participating in a citizen’s assembly as a deliberating member.

If you have strong credentials, you can serve as an expert during the Assembly. I don't know if you can help organize it.

Note: if you live in a country like the US or the UK, where there are significantly more Safety-aware and technical people, deciding whether to participate creates a need for coordination, where you may want to prioritize knowledgeable people.

Not very plausibly, but plausibly enough to warrant action, the whole event might turn out to be a very short action window, with some amount of value lock-in as a result. To give you a sense of the situation, imagine we ran a World Climate Citizen Assembly, but in the early 70s. If people had cached the thought "but we did a citizen assembly in the 70s and concluded air pollution was important, and climate change a problem for the long term", this would undermine our current ability to react accordingly. This also might turn out to be the only democratic deliberation we will ever have on AI at a global level. The 'long reflection' might be just now.

2-Low awareness of AI Safety calls for careful action

By default, I don't think AI Safety will come up as a concern in most assemblies outside a few countries. The safeguard is therefore to educate the population at large on AI Safety, rather than hope to become a participant or appointed expert. The plan is for enough people to be exposed to high-quality content that some of them end up in the Citizen Assemblies.

Although the vague talking points of AI Safety are clearly becoming mainstream, the public's grasp of the actual, technical models of AI Safety still lags far behind the state of the art, in a way it no longer does for e.g. climate change. Only a few people around me could name an empirical study on AI safety, even though such studies have made the concerns much more legitimate. I strongly encourage, after reflection:

1- Funders / donors to re-assess the priority of producing explainers, such as multilingual Kurzgesagt partnerships, original videos, adverts or collaborations with big YouTubers or podcasters from different linguistic areas, documentaries, or high-fidelity media outreach that includes key distillations of up-to-date AIS observations and arguments which could eventually reach future participants. This of course has to be done in full transparency: if communicators want to mention other concerns upon learning that the request is motivated by a Citizen Assembly, their concerns should be taken into account.

2- Scientists to consider speaking in public. Real Citizen Assemblies rely on guest speakers who can be requested by the public. You should expect some governments (such as France's) to be hostile to inviting Safety-aware speakers, yet not so hostile as to oppose the participants inviting a speaker they themselves thought of. If you have credentials, I'd strongly encourage you to dedicate some time to this, so as to become someone the average Joe spontaneously thinks of when wondering "who could we request as a speaker?". This means at least appearing on TV or on a highly viewed YouTube channel.

3- [Low confidence] Specialists in epistemics and disagreement handling to consider joining one of the national CA projects and actively coordinating with each other. If you have a background in deliberation, at any scale, AI-assisted or not, your help might be needed. This is a rare opportunity to point out potential improvements in the epistemics of the CA process, or simply to point CA organizers to tools that already exist and could help improve Citizen Assemblies (such as Double Crux, forecasting, etc.).

What I'm not suggesting:

1-Biasing the conversation

A citizen assembly on AI will go through many topics. I'm not asking participants to hijack the theme and make it all about Safety. It is a crucial theme, but the aim of the assembly is to discuss broader concerns. This process is followed diligently, and attempts to bias the conversation will credibly get your participation revoked. Conflicts of interest should of course be a no-go for a participant, or at least be transparently flagged.

2-Being ideological, tampering with the deliberative process

A citizen assembly is a place for dialogue. If you plan on going there with a Soldier Mindset and a fleet of pro-Safety hooligans, please don't. It's important to treat these questions with an open mind, given all the uncertainties at stake, and to interact courteously. I think the greatest benefit we can have is making existing concerns and science visible, so being adversarial with people who disagree seems a particularly useless endeavor. Also, please do not flood any Citizen Assembly with an excessive number of EA applications.

3-Being careless

I'm not suggesting selling a solution or a particular threat model as the "definitive" one, whether acting as an expert or a participant, nor using the first arguments you can think of. Talking about Safety requires carefulness: when learning about the risks of AGI, some people see this as a motivation to join the race. If you end up participating in the assembly, coordinate with experts to flag common errors and misconceptions on the pro-safety side.

A last caveat: I’m not an AI Safety expert, either technical or in terms of governance. If you’re interested, I highly suggest you read the comments below, as I’m expecting important nuances to be brought up.

If you are interested, please join this discord server as a means to coordinate and be kept up to date. You can also send me a DM on the forum.

Comments



Where and when are these supposed to occur, and how can we track that for our respective countries?

Good question. The international coalition is still being built right now, which means that no official dates have been decided. I've heard a credible source say the assemblies are planned to start in June. I'll update the post and the Discord server as soon as I get more information.
