
I feel like I'm taking crazy pills. 

It appears that many EAs believe we shouldn't pause AI capabilities development until it can be proven to have < ~ 0.1% chance of X-risk.

Put less confusingly: it appears many EAs believe we should allow capabilities development to continue despite the current X-risks.

This seems obviously terrible to me.

What are the best reasons EA shouldn't be pushing for an indefinite pause on AI capabilities development?


3 Answers

Messy practical reasons.

I agree with Larks that most of us would press a magic button to slow down AI progress on dangerous paths.

But we can't, which raises two problems:

  1. Tractability.
  2. Effectiveness of moderate success. If you get a non-global slowdown, a slowdown that ends too early, a slowdown regime that's evadable, or if you differentially slow cautious labs, or even just the leading labs, the effect is likely net-negative. (Mechanisms: increasing multipolarity among labs, differentially boosting less-cautious actors, and a compute overhang enabling rapid scale-up of training compute. See Slowing AI: Foundations.)

(I'd be excited to talk about proposals more specific than 'push for a pause,' or outcomes more specific than 'pause until proven <0.1% doom.' Who is doing the pausing; what are the rules? Or maybe you don't have specific proposals/outcomes in mind, in which case I support you searching for new great ideas, but it's not like others haven't tried and failed already.)

[anonymous]

Thanks for the comment, Zach.

1. Can you elaborate on your point about tractability?

2. I'm less worried about multipolarity because the leading labs are so far ahead, and I have short timelines (~10 years). My guess is that if you had short timelines, you might agree?

3. If we had moderate short-term success, my intuition is that we'd have found an effective strategy that could then be scaled. I worry that your thinking basically amounts to 'it needs to be an immediately perfect strategy or don't bother!'

Zach Stein-Perlman

1. Pushing a magic button would be easy; affecting the real world is hard. Even if slowing is good, we should notice whether there exist tractable interventions (or: notice interventions' opportunity cost).

2. Nope, my sense is that DeepMind, OpenAI, and Anthropic do and will have a small lead over Meta, Inflection, and others, such that I would be concerned (re increasing multipolarity among labs) about slowing DeepMind, OpenAI, and Anthropic now. (And I have 50% credence on human-level AI [noting this is underspecified] within 9 years.)

3. Yeah, maybe, depending. I'm relatively excited about "short-term success" that seems likely to support the long-term policy regimes I'm excited about, like global monitoring of compute and oversight of training runs with model evals for dangerous capabilities and misalignment, maybe plus a compute cap. I fear that most pause-flavored examples of "short-term success" won't really support great long-term plans. (Again, I'd be excited to talk about specific proposals/outcomes/interventions.)

This sequence is still in progress but is the best collection of resources that I know of regarding slowing AI (including an indefinite pause).

Worth cross-posting to the EA Forum. @Zach Stein-Perlman 

[anonymous]

Thank you Ben, I will take a look :)

I think the best reason is that it's not within the Overton window :)

Last I checked, the whole point of the Overton window is that you can only shift it by advocating for ideas outside of it.

[anonymous]
I'm confused: where is the evidence that it's outside the Overton window?
Peter Berggren
Don't really think there is any; in fact, there's plenty of evidence to the contrary, from the polls I've seen.
David_Moss
We found 51% support, 25% opposition in our polling here.
[anonymous]

The Overton window is a crutch for EAs who don't believe in the power of advocacy.

I've worked in advocacy for EA causes for a bit, so I definitely believe in the power of it, but I also think the Overton window is a pretty crucial consideration for folks who are trying to mobilize the public. I'm guessing this is a popular view among people who work in advocacy for EA causes, but I might be wrong.

To be fair, I do think there could be value in making bold asks outside the Overton window. James Ozden has a really good piece about this. I think groups like DxE and PETA have done this for the animal movement, and it seems totally plausible to me that this has had a net positive effect.

On the other hand, I think lots of the tangible changes we've seen for farmed animals have come from the incremental welfare asks that groups like Mercy For Animals and The Humane League focus on (disclaimer: I worked at the latter). The fact that these groups have been very careful to keep their asks within the Overton window has had the benefit of (1) helping advocates gain broad-based public support; and (2) getting corporations and policymakers on board and willing to actually adopt the changes they are asking for.

It seems likely to me that the second point applies to AI safety, but I'm not sure about the first and would probably need to see more polling or message testing to know. Nonetheless, I suspect these concerns might be part of why the AI pause ask hasn't been as widely adopted among EAs (although a number of them did sign the FLI letter).

[anonymous]

The public is very concerned about powerful AI and wants something done about it.

If anyone is outside the Overton window, it's EAs.

I agree that the public has been pretty receptive to AI safety messaging. Much more than I would have expected a few years ago.

It sounds like you already have some takes on this question — in that case, it could be worth writing something up to make the case for why EAs should be advocating for a pause. I’d be happy to offer feedback if you do.

[anonymous]

That's very generous of you, thanks Tyler!

Comments
Larks

There's a big gap between "believe we should press it if presented with a magic global pause button" and "think pro-pause advocacy is the most efficient use of their time on the current margin". I suspect the majority of X-risk-concerned EAs would press such a button if given the chance.

[anonymous]
  1. I think you'd be surprised how many wouldn't press such a button.
  2. What publicly available thinking have you seen on the potential impact of advocacy in this context (within EA)? I'm interested in how you've formed this opinion.

You can run a poll to try to find out the answer to 1) if you want!

Pato

it appears many EAs believe we should allow capabilities development to continue despite the current X-risks.

Where do you get this from?

Also, this:

have < ~ 0.1% chance of X-risk. 

means p(doom) < ~0.001

[anonymous]4
1
0

"Where do you get this from?"

  • the lack of content and discussion regarding a pause on the forum / podcasts / Twitter
  • the dismal success rate of grant requests to drive such a strategy
  • vibe