Different strategies in AI safety pay off on different timescales. For example:

  • Internal policy and governance work at AI developers can matter on very short timelines.
  • US government policy work matters across ~all timelines, while middle-power policy and Chinese domestic policy matter more on medium and longer timelines.
  • Building career capital and growing the AI safety community are best suited to medium-to-long timelines.

To decide which AI timeline(s) to focus on, we need to know:

  1. Which AI timelines are more likely?
  2. Which AI timelines can we have more impact in?

Both questions are key, but 1) has received far more attention than 2). Here, I summarise the main considerations pertaining to 2): for a given (broad) timelines distribution, should we devote more resources to strategies that pay off on short timelines, or to those that pay off on longer timelines?

Reasons to act on shorter timelines:

  • Neglectedness: fewer actors will have the chance to work on AI safety in shorter-timelines worlds, so each individual contributes more in expectation. And the world is currently not taking a near-term intelligence explosion very seriously, so those of us who are have outsized influence.
  • Predictability: We are better able to nearcast how AGI will be developed, and can therefore act more effectively because our world models are better. If timelines are quite long (e.g. AGI arrives via a non-LLM paradigm, after a deep AI winter), it is perhaps less clear what we should be doing now. The geopolitical balance of power is also harder to predict on longer timelines.
  • Firefighting: If you think P(doom) is quite low, then much of the doom probability mass likely falls on unusually short timelines. So the median ‘doom’ world might sit at, e.g., the 20th percentile of the timelines distribution. If you care most about averting existential risk, you should then be most focused on short timelines.
  • Option value: As we get evidence about the pace of AI progress, it is easier to transition from focusing on short timelines to longer timelines than vice versa, since longer timelines leave time to pivot and plan a new strategy. (However, some longer-timelines plans, such as community building, require serial time and cannot be speedrun at the end.)
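The ‘firefighting’ point above can be made concrete with a toy Bayesian calculation. All of the numbers below are invented purely for illustration (they are not estimates): a uniform prior over five timeline buckets, and a doom probability that is higher when AGI arrives sooner.

```python
# Toy illustration: if doom is much likelier on short timelines, then even
# a low overall P(doom) implies most doom-probability mass sits on the
# short end of the timelines distribution. All numbers are made up.

timelines = [2027, 2030, 2035, 2045, 2060]          # hypothetical AGI dates
p_timeline = [0.2] * 5                              # uniform prior over buckets
p_doom_given_t = [0.30, 0.15, 0.05, 0.02, 0.01]    # doom likelier when rushed

# Overall P(doom) = sum over timelines of P(timeline) * P(doom | timeline)
p_doom = sum(p * d for p, d in zip(p_timeline, p_doom_given_t))

# Posterior over timelines, conditional on doom (Bayes' rule)
posterior = [p * d / p_doom for p, d in zip(p_timeline, p_doom_given_t)]

print(f"P(doom) = {p_doom:.3f}")
for t, q in zip(timelines, posterior):
    print(f"P(AGI in {t} | doom) = {q:.2f}")
```

With these made-up numbers, overall P(doom) is only ~11%, yet more than half of the conditional doom probability lands on the earliest bucket — which is the sense in which the median ‘doom’ world is a short-timelines world.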

Reasons to act on longer timelines:

  • Career capital: Many people in this space are still quite junior, in the steeply ascendant part of their career trajectory, so they will be more senior and influential in longer-timelines worlds.
  • Serial dependencies: Some types of work may only succeed on long timescales because lots of things need to happen in series (e.g. some types of international agreements, perhaps) or pay off after exponential growth (e.g. movement building). If your personal fit is best for these types of work, that means relying on longer timelines.
  • Political tractability: If you think the Trump administration is unusually unlikely to heed our advocacy for AI safety regulations, then working on the assumption of longer timelines may be more politically viable.
  • Playing to your outs: If you have a very high P(doom), then conditional on not dying, you might expect timelines to be long, and investing in these worlds with better chances of success might be more impactful (cf. the logistic success curve model).
  • Flourishing futures: One may think that the very best futures are where most expected value lies, and that these are more achievable in longer timelines when there is more time to prepare for and wisely manage an intelligence explosion. So focusing on these higher-EV scenarios entails working on longer timelines. A countervailing consideration is that autocracies (notably China) may be more likely to lead AI development on longer timelines, reducing the value of those futures.

It is hard to know how these considerations net out! I'm curious for your takes.

Note that I agree with Toby on 'broad timelines', and this slider is very simplistic. Caveats:

  • In theory, we should be weighing up the impact of our actions across all timelines weighted by their likelihood.
  • Different people should specialise more in different timelines: e.g., people currently at frontier AI developers and in key government roles should focus more on very short timelines, while high-schoolers should focus more on longer timelines. This is the case even if their credences over timelines are the same.
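The two caveats above can be sketched as a toy expected-value calculation. The probabilities, actors, and impact scores below are invented purely for illustration: the same timelines distribution can favour different strategies for different people, because impact-conditional-on-timeline differs by actor.

```python
# Toy EV weighting across timelines (all numbers are made up).
# Strategy value = sum over timelines of P(timeline) * impact(timeline).

p_timeline = {"short": 0.3, "medium": 0.4, "long": 0.3}

# Hypothetical impact of each actor's best strategy, conditional on timeline:
impact = {
    "lab insider (internal policy work)": {"short": 10, "medium": 4, "long": 1},
    "high-schooler (career capital)":     {"short": 0,  "medium": 3, "long": 10},
}

for actor, by_timeline in impact.items():
    ev = sum(p_timeline[t] * v for t, v in by_timeline.items())
    print(f"{actor}: EV = {ev:.1f}")
```

Even with identical credences, the lab insider's expected impact is dominated by short-timelines worlds and the high-schooler's by long-timelines worlds, so they should specialise differently.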

It could be interesting to analyze how much effort from the safety community is currently going towards different timelines.
