Capacity: up to 40 people, first come, first served. Please RSVP! (No account needed, just click the buttons above.)

How could AI existential risk play out? Choose one of five roles, play through a plausible scenario with other attendees, and discuss it afterward. This was a popular session at the EAGxBerlin conference, with ~70 participants across two sessions and positive feedback, so we're running it again for EA Berlin.

Everyone is welcome, even if you're new to AI safety! People underrepresented in the AI safety field are especially welcome. If you're very new to the field, we recommend reading or skimming an introductory text such as this 80,000 Hours article or the Most Important Century series summary beforehand, if possible.

Game plan

19:00 Doors open, snacks

19:30 Game starts (be there by 19:30 at the latest)

  • Intro (5 min)
  • Choose a role, read your briefing, prepare (10 min)
  • Play in groups of ~5 (30 min)
  • Wrap up (5 min)

~20:30 (optional) Stay longer to discuss, play a second round, or socialize, with more snacks

Open end (22:00 or later)

(Arrive between 19:00 and 19:30 to join the game, or after 20:30 to socialize; leave anytime.)

Scenario

Imagine it's the year 2030: OpenAI has just announced plans to train a new model with superhuman capabilities for almost every task, such as analyzing politics and economics, strategizing, coding, trading on the stock market, writing persuasively, generating realistic audiovisual content, and more. It could do all this for $10/h (at human speed).

Many are excited about the possibilities and dream of a world in which no human ever has to work again. Others are more worried about the risks. Most experts see no evidence that the model is obviously misaligned or agentic itself, but admit they cannot guarantee safety either.

If granted permission, OpenAI would start training in two weeks and deploy the model six weeks after that. The US White House has hurriedly organized a five-day summit to agree on an appropriate response and invited the following stakeholders (choose one):

  • US president: Joe Biden (anti-aging research had some breakthroughs), Hillary Clinton or other
  • AI company CEO: Sam Altman (OpenAI), Daniela Amodei (Anthropic President) or other
  • A prominent AI safety technical expert such as Yoshua Bengio (University of Montreal)
  • Spokesperson for the G7 Working Group on AI Regulation
  • Head of the US National Security Agency (NSA)

(This list is not exhaustive; other actors such as China, NGOs, and competing AI companies also seem relevant, but smaller groups allow more engagement.)

Host: I (Manuel Allgaier) am currently on a sabbatical, upskilling in AI governance and exploring various AI safety & EA meta projects. Before that, I ran EA Berlin (2019-21) and EA Germany (2021-22) and led the EAGxBerlin 2022 orga team. I learned about this game while taking the AI Safety Fundamentals Governance course. I found this more creative, intuitive approach to AI safety engaging and a useful complement to the usual, more academic formats, so I developed it further and hosted two games at EAGxBerlin with ~35 participants each. I've followed AI safety and done some freelance work in the field since ~2019, but I'm by no means an expert. I know enough to answer most questions that might come up, and I've invited some people with more expertise just in case.

Logistics

Food: Martin will bring pita bread, vegetables, and dips. Feel free to bring anything else yourself (preferably no meat, ideally vegan).

Location: We're grateful to the Chaos Computer Club (CCC) Berlin for hosting us in their space! How to find it (in German): https://berlin.ccc.de/page/anfahrt[1]. Please contact @__nobody if you have any questions about CCC or the location.

Questions and feedback are welcome via comment, forum PM, or Telegram! Looking forward to fun and insightful games :)
- @Manuel Allgaier (Telegram, anonymous feedback) & @Milli | Martin (Telegram)

  1. ^
    This is the entrance: Ring at "Chaos Computer Club Berlin" and press when it clicks (no voice or buzzer).
Comments

The next workshop, with @Glenn Gregor, goes in a wholly different direction: radically improving mental health through effective emotional work.

Join us on Dec 11 (Mon).
