This question is on my list of things I should ask but think are too dumb to.

A lot of people have estimates and forecasts for when they think a transformative AI or superintelligent system will come into existence. Yet it isn't clear how they arrive at these estimates.

I tend to defer to others who are a lot more senior and have much more expertise, simply because that's the easy way out. But I'd like to try to get a better sense of what the process of forming such an estimate actually looks like.

I thought this image from DALL-E was cool and fitting here.

2 Answers

For me, a good first pass was: go to a more structured exercise like the Carlsmith report, make up my own numbers, check where those numbers differ most from others' numbers, dig into possible disagreements, and repeat.
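That kind of exercise can be sketched as a few lines of arithmetic. The Carlsmith report decomposes overall risk into a chain of conditional premises and multiplies the probabilities together; the premise labels below loosely paraphrase the report's, and every number (both "mine" and "theirs") is invented purely for illustration:

```python
# Carlsmith-style decomposition: the headline estimate is the product
# of conditional premise probabilities. Labels loosely paraphrase the
# report; all numbers below are invented for illustration.
mine = {
    "advanced planning systems feasible by 2070":   0.65,
    "strong incentives to build them":              0.80,
    "much harder to build aligned than misaligned": 0.40,
    "misaligned systems deployed at scale":         0.65,
    "damage scales to human disempowerment":        0.40,
    "disempowerment is an existential catastrophe": 0.95,
}
theirs = {
    "advanced planning systems feasible by 2070":   0.80,
    "strong incentives to build them":              0.85,
    "much harder to build aligned than misaligned": 0.15,
    "misaligned systems deployed at scale":         0.50,
    "damage scales to human disempowerment":        0.30,
    "disempowerment is an existential catastrophe": 0.90,
}

def overall(premises):
    """Multiply the conditional probabilities into one headline number."""
    out = 1.0
    for p in premises.values():
        out *= p
    return out

print(f"my overall estimate:    {overall(mine):.3f}")
print(f"their overall estimate: {overall(theirs):.3f}")

# The premise with the biggest gap is where to dig into disagreement.
gap = max(mine, key=lambda k: abs(mine[k] - theirs[k]))
print(f"largest disagreement:   {gap!r}")
```

Adjusting a number, watching how the headline estimate and the biggest disagreement move, and going back to the source of that disagreement is the "dig in and repeat" step.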

Ah, thanks for sharing! I assume the Carlsmith report is this -

Joel Becker
Yes! :)

It might help to get a more fine-grained sense of what people's AI timeline forecasts actually are. I'd be asking questions like:

  • Are their predictions public?
  • Are they making specific, empirically verifiable predictions?
  • Are they making verifiable predictions not only about a world-ending catastrophe, but about steps along the way? In other words, is there a way to practically evaluate their calibration, short of living through, or dying from, the creation of AGI?
  • Have any of their predictions resolved already? If so, were they correct or incorrect?
  • Do they provide explicit reasoning for their predictions?

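Where those step-along-the-way predictions exist and have resolved, calibration can be checked mechanically, for instance with a Brier score. A minimal sketch (the forecasts below are invented):

```python
# Brier score: mean squared error between stated probabilities and
# binary outcomes (0 = didn't happen, 1 = happened). Lower is better;
# always guessing 0.5 scores 0.25. The forecasts here are invented.
forecasts = [
    (0.90, 1),  # e.g. "benchmark X falls within 2 years" -- it did
    (0.70, 1),
    (0.60, 0),
    (0.20, 0),
    (0.80, 1),
]

def brier(pairs):
    """Average of (probability - outcome)^2 over resolved predictions."""
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

print(f"Brier score: {brier(forecasts):.3f}")  # → 0.108
```

A forecaster with enough resolved predictions and a low Brier score has demonstrated calibration in a way that no single long-horizon claim can.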
I am personally very skeptical of the idea that willingness to make a bet is a meaningful substitute for a track record of well-calibrated past predictions. Within certain communities adjacent to ours, making bets is a status move and garners attention, and I find that whatever informational value a bet might otherwise contain is completely confounded in that context. I might feel differently about a private bet that was not publicly disclosed until after it resolved.

Trustworthy evidence that somebody has earned a substantial amount of money through repeated bets on AI outcomes in an anonymous prediction market is the sort of thing that would give me confidence in their timelines. Otherwise, my utter lack of expertise and the absence of consensus in the field mean that I do not have the sophistication to know whom to defer to.