
(x-posted on LW)

Some have argued that one should lean towards acting as if AI timelines are short, since in that scenario it's possible to have more expected impact. But I haven't seen a thorough analysis of this argument.

Question: Is this argument valid? And if so, how strong do you think it is?

The basic argument seems to be: if timelines are short, the field (AI alignment) will be relatively smaller and will have made less progress, so you can pick low-hanging fruit that wouldn't otherwise be picked. (The toy model after the list below makes this concrete.)

The question affects career decisions. For example, if you optimize for long timelines, you can spend more time investing in yourself and delay your impact.

The question interacts with the following questions in somewhat unclear ways:

  • How fast do returns to additional work diminish (or increase)?
    • If returns don't diminish, the argument above fails.
    • If the field will grow very quickly, returns will diminish faster.
  • Is your work much more effective when it's early?
    • This may be the case because work can be hard to parallelize ('9 women can't make a baby in 1 month'), and because field-building is more effective earlier, as the field can compound-grow over time, so someone should start early.
    • If work is most effective earlier, you shouldn’t lose too much time investing in yourself.
  • Is work much more effective at crunch time?
    • If yes, you should focus more on investing in yourself (or do field-building for crunch time) instead of doing preparatory research.
  • If timelines are longer, is this evidence that we'll need a paradigm shift in ML that makes alignment easier/harder?
    • (This question seems less tractable than the others.)
  • Is your comparative advantage to optimize for short or long timelines?
    • For example, young people can contribute more easily given longer timelines and vice versa.
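
To make the core argument and the role of diminishing returns concrete, here is a minimal toy model in Python. The logarithmic-returns assumption and every number in it are illustrative assumptions I've chosen for exposition, not estimates:

```python
# Toy expected-impact model. All numbers are illustrative assumptions,
# not forecasts. Returns to the field's cumulative effort are assumed
# logarithmic: total_value(x) = log(1 + x), so the marginal value of
# one extra unit of your work is 1 / (1 + x).

def marginal_impact(field_effort: float) -> float:
    """Marginal value of one extra unit of work, given the total effort
    the field will have accumulated by the time it matters."""
    return 1.0 / (1.0 + field_effort)

# Hypothetical scenarios: (probability, cumulative field effort).
scenarios = {
    "short timelines": (0.3, 10.0),   # small field, little prior progress
    "long timelines":  (0.7, 100.0),  # large field, much prior progress
}

for name, (prob, effort) in scenarios.items():
    weighted = prob * marginal_impact(effort)
    print(f"{name}: P={prob:.1f}, "
          f"marginal impact={marginal_impact(effort):.4f}, "
          f"probability-weighted={weighted:.4f}")

# Result: the short-timelines world yields ~0.0273 expected marginal
# impact vs. ~0.0069 for long timelines, i.e. ~4x more despite being
# less likely, purely because the field is ~10x smaller there. With
# linear (non-diminishing) returns the asymmetry disappears, which is
# the "if returns don't diminish, the argument fails" point above.
```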

If someone would like to seriously research the overall question, please reach out. The right candidate can get funding.

Answers

In my opinion, the best discussion of the optimal temporal allocation of work aimed at reducing existential risk is to be found in these two essays:

Cotton-Barratt, Owen (2015) Allocating risk mitigation across time, technical report #2015-2, Future of Humanity Institute.

Ord, Toby (2014) The timing of labour aimed at reducing existential risk, Future of Humanity Institute, July 3.

[anonymous] replied:

Those definitely help, thanks! Any additional answers are still useful, and I don't want to discourage answers from people who haven't read the above. For example, we may have learned some empirical things since those analyses came out.
