Ozzie Gooen

6212 · Berkeley, CA, USA · Joined Dec 2014

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences
1

Ambitious Altruistic Software Efforts

Comments
633

Topic Contributions
1

There are audio versions on the Substack. I can see about adding them to the EA Forum more directly in the future.
https://quri.substack.com/p/eli-lifland-on-navigating-the-ai-722

Similar. I think I'm happy for QURI to be listed if it's deemed useful.

Also, though, I think that sharing information is generally a good thing, this type included.

More transparency here seems pretty good to me. That said, I get that some people really hate public rankings, especially in the early stages of them. 

This is very interesting, really happy to see this. As usual, I think it's good to take these with a big grain of salt - but I'm happy to get any halfway-reasonable attempt at a starting point.

One big issue here is that the boundaries are for the 25th/50th/75th percentiles. I would have expected many of these extrapolations to get much wilder (either doom or utopia), but maybe much of that is outside these percentiles.

Even then, though, I imagine many readers around here might give >25% odds to at least one of "discontinuous benefit" or "catastrophic harm" by 2122. 2122 is a really long time away.

Many of the confidence bands seem to grow linearly over time, instead of exponentially or similar. This is surprising to me.

One point: I would be pretty enthusiastic about people making "meta-predictions" that treat these as baselines. For instance: "In 5 years, these estimates will have been revised, and the difference will be less than 20%. This includes estimates made within these 5 years."

That way, onlookers could make quick forecasts on "how correct this set of forecasts is", using simpler (not time-series) methods.
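As a minimal sketch of how such a meta-prediction could be resolved (the 20% threshold is from the comment above; the metric names and numbers are purely illustrative assumptions, not anything from the actual forecasts):

```python
# Hypothetical sketch: resolving a "meta-prediction" that revised estimates
# will land within 20% of today's baselines. All values below are made up.

def within_threshold(baseline: float, revised: float, threshold: float = 0.20) -> bool:
    """True if the revised estimate moved by less than `threshold` (as a fraction of the baseline)."""
    return abs(revised - baseline) / abs(baseline) < threshold

# Today's median estimates vs. (hypothetical) revisions 5 years later.
baselines = {"metric_a": 0.40, "metric_b": 120.0}
revisions = {"metric_a": 0.45, "metric_b": 150.0}

resolved = {name: within_threshold(baselines[name], revisions[name]) for name in baselines}
share_stable = sum(resolved.values()) / len(resolved)

print(resolved)      # {'metric_a': True, 'metric_b': False}
print(share_stable)  # 0.5 -> a rough read on how well the original set held up
```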

It seems like a bunch of care/preparation went into having good questions, so I think here I'd have a lot of trust in the interviewer's brief.

Just fyi - in this case, we spent some time in the beginning making a very rough outline of what would be good to talk about. Much of this is stuff Eli put forward. I've also known Eli for a while, so had a lot of context going in.

For those who go through this, I'm really curious how important the transcript was. 

In terms of (marginal) work, this was something like:
- In person prep+setup: 3 hours
- Recording: 1.5 hours
- Editing: ~$300, plus 4 hours of my time
- Transcription: $140, plus around 5 hours of our team's time.

(There was also a lot of time spent on my part just messing around and learning the various pieces, but much of that could be improved later. Also, I was really aggressive about removing filler words and pauses. I think this is unusual, in part because it's resource-intensive to do well.)

I'd like to do something like, "Only do transcripts for videos that get 50 upvotes, or that we're pretty sure will get 50 upvotes", but I'm not sure. (My guess is that poor transcripts, meaning almost anything that takes less than ~$200 or ~3 hours of time, will barely be good enough to be useful.)

Glad you liked it! 

I'll see about future videos with him.

I'll flag that if you're viewing this and have more suggestions, or would like to just talk publicly about your takes on things like this, do message me.

The transcripts are pretty annoying to do (the labor-intensive part that's hardest to outsource), but the rest isn't that bad.

Yea, I assume the full version is impossible. But maybe there are at least some simpler statements that can be inferred? Like, "<10% chance of transformative AI by 2030."

I'd be really curious to get a better read on what market specialists around this area (maybe select hedge fund teams around tech disruption?) would think.

This seems pretty neat, kudos for organizing all of this! 

I haven't read through the entire report. Is there any extrapolation based on market data or outreach? I see arguments about market actors not seeming to have close timelines as the main argument that timelines are at least 30+ years out.

I earlier gave some feedback on this, but more recently spent more time with it. I sent these comments to Nuno, and thought they could also be interesting to people here.

  • I think it’s pretty strong and important (as in, an important topic).
  • The first half in particular seems pretty dense. I could imagine some rewriting making it more understandable.
  • Many of the key points seem more encompassing than just AI. “Selection effects”, “being in the Bay Area” / “community epistemic problems”. I'd wish these could be presented as separate posts and then linked to here (and other places), but I get this isn't super possible.
  • I think some of the main ideas in the point above aren’t named too well. If it were me, I’d probably use the word “convenience” a lot, but I realize that’s niche now.
  • I really would like more work figuring out what we should expect of AI in the next 20 years or so. I feel like your post was more “a lot of this extremist thinking seems fishy” than “here's a model of what will happen and why”. This is fine for this post, but I'm interested in the latter.
  • I think I mentioned this earlier, but I think CFAR was pretty useful to me and a bunch of others. I think there was definitely a faction that wanted them to be much more aggressive on AI, and didn't really see the point of donating to them besides that. My take is that the team was pretty amateur at a lot of key organizational/management things, so did some sloppy work/strategy. That said, there was much less money then, and there wasn't a whole lot of great talent for such things. I think they were pretty overvalued by rationalists at the time, but I would consider them undervalued relative to what EAs tend to think of them as now.
  • The diagrams could be improved. At least, bold/highlight the words “for” and “against”. I'm also not sure if the different-size blocks are really important.