It might help to get a more fine-grained sense of what people's AI timeline forecasts actually are. I'd be asking questions like:
- Are their predictions public?
- Are they making specific, empirically verifiable predictions?
- Are they making verifiable predictions not only about a world-ending catastrophe, but about steps along the way? In other words, is there a way to practically evaluate their calibration, short of living through, or dying from, the creation of AGI?
- Have any of their predictions resolved already? If so, were they correct or incorrect?
- Do they provide explicit reasoning for their predictions?
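If resolved predictions do exist, calibration can be scored mechanically; the Brier score is one standard way to do it. A minimal sketch, using made-up forecast data purely for illustration:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes.
    0.0 is perfect; always guessing 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical resolved predictions:
# (probability assigned, outcome: 1 if it happened, 0 if it didn't)
resolved = [(0.9, 1), (0.7, 1), (0.3, 0), (0.8, 0)]
print(round(brier_score(resolved), 4))  # 0.2075
```

Lower is better, and a track record of many resolved predictions matters far more than any single score.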
I am personally very skeptical of the idea that willingness to make a bet is a meaningful substitute for a track record of well-calibrated past predictions. Within certain communities adjacent to ours, making bets is a status move and garners attention, and I find that whatever informational value a bet might otherwise contain is completely confounded in that context. I might feel differently about a private bet that was not publicly disclosed until after it resolved.
Learning from a trustworthy source that somebody has earned a substantial amount of money through repeated bets on AI outcomes in an anonymous prediction market is the sort of thing that would give me confidence in their timelines. Otherwise, my utter lack of expertise and the absence of consensus in the field mean that I do not have the sophistication to know whom to defer to.
Ah, thanks for sharing! I assume the Carlsmith report is this - https://arxiv.org/abs/2206.13353?