I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.
Similar. I think I'm happy for QURI to be listed if it's deemed useful.
More broadly, though, I think that sharing information is generally a good thing, this type included.
More transparency here seems pretty good to me. That said, I get that some people really hate public rankings, especially in their early stages.
This is very interesting, really happy to see this. As usual, I think it's good to take these with a big grain of salt - but I'm happy to get any halfway-reasonable attempt at a starting point.
One big issue here is that the boundaries are for the 25th/50th/75th percentiles. I would have expected many of these extrapolations to get much wilder (either toward doom or utopia), but maybe much of that is outside these percentiles.
Even then though, I imagine many readers around here might give >25% odds to at least one of "discontinuous benefit" or "catastrophic harm" by 2122. 2122 is a really long time away.
Many of the confidence bands seem to grow linearly over time, instead of exponentially or similar. This is surprising to me.
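To illustrate why this surprises me, here's a minimal sketch with made-up parameters (nothing taken from the report): for a quantity that compounds, I'd expect the interquartile band of the level to fan out much faster than linearly, whereas an additive process widens more slowly.

```python
# Sketch only: compare band widths for an additive vs. a compounding process.
# The shock distribution (2% mean, 5% sd per year) is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
years, n_paths = 100, 10_000
shocks = rng.normal(0.02, 0.05, size=(n_paths, years))  # hypothetical yearly shocks

additive = shocks.cumsum(axis=1)                 # additive accumulation
multiplicative = np.cumprod(1 + shocks, axis=1)  # compounding accumulation

for t in (10, 50, 100):
    add_iqr = np.percentile(additive[:, t - 1], 75) - np.percentile(additive[:, t - 1], 25)
    mul_iqr = np.percentile(multiplicative[:, t - 1], 75) - np.percentile(multiplicative[:, t - 1], 25)
    print(f"year {t}: additive IQR {add_iqr:.2f}, multiplicative IQR {mul_iqr:.2f}")
```

In this toy setup the additive band grows roughly with the square root of time, while the band on the compounding level widens much faster; roughly linear bands would suggest something closer to the additive picture.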
One point: I would be pretty enthusiastic about people making "meta-predictions", treating these as baselines. For instance: "In 5 years, these estimates will have been revised. The difference will be less than 20%. This includes the estimates for these 5 years."
That way, onlookers could make quick forecasts on how correct this set of forecasts is, using simpler (not time-series) methods.
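Concretely, a meta-prediction like that could resolve mechanically. Here's a rough sketch (my own framing, with made-up numbers) of one possible resolution rule:

```python
# Sketch of a resolution rule for "the revised estimate will differ by less than 20%".
def within_20_percent(original: float, revised: float) -> bool:
    """Resolve YES if the revised estimate is within 20% of the original."""
    return abs(revised - original) / abs(original) < 0.20

# Hypothetical example: an original estimate of 0.40 later revised to 0.55.
print(within_20_percent(0.40, 0.55))  # False -> the meta-prediction resolves NO
```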
It seems like a bunch of care/preparation went into having good questions, so I think here I'd have a lot of trust in the interviewer's brief.
Just fyi - in this case, we spent some time in the beginning making a very rough outline of what would be good to talk about. Much of this was stuff Eli put forward. I've also known Eli for a while, so I had a lot of context going in.
Same for QURI (assuming OP ever evaluates/funds QURI).
For those who go through this, I'm really curious how important the transcript was.
In terms of (marginal) work, this was something like:
- In person prep+setup: 3 hours
- Recording: 1.5 hours
- Editing: ~$300, plus 4 hours of my time
- Transcription: $140, plus ~5 hours of our team's time.
(There was also a lot of time spent messing around and learning the various pieces, but much of that could be improved later. Also, I was really aggressive about removing filler words and pauses. I think this is unusual, in part because it's resource-intensive to do well.)
I'd like to do something like, "Only do transcripts for videos that get 50 upvotes, or that we are pretty sure will get 50 upvotes", but I'm not sure. (My guess is that low-effort transcripts, meaning almost anything done with less than ~$200 or 3 hours of time, will barely be good enough to be useful.)
Glad you liked it!
I'll see about future videos with him.
I'll flag that if others viewing this have more suggestions, or would like to talk publicly about their takes on things like this, do message me.
The transcripts are pretty annoying to do (the labor-intensive part that's hardest to outsource), but the rest isn't that bad.
Yea, I assume the full version is impossible. But maybe there are at least some simpler statements that can be inferred? Like, "<10% chance of transformative AI by 2030."
I'd be really curious to get a better read on what market specialists in this area (maybe select hedge fund teams focused on tech disruption?) would think.
This seems pretty neat, kudos for organizing all of this!
I haven't read through the entire report. Is there any extrapolation based on market data or outreach? I see arguments that market actors don't seem to have short timelines, used as the main argument that timelines are at least 30+ years out.
I earlier gave some feedback on this, but more recently spent more time with it. I sent these comments to Nuno, and thought they could also be interesting to people here.
There are audio versions on the Substack. I can see about adding them to the EA Forum more directly in the future.
https://quri.substack.com/p/eli-lifland-on-navigating-the-ai-722