
Lauro Langosco

125 karma · Joined Nov 2019 · Cambridge, UK · www.laurolangosco.com/

Comments (18)

I'd like to make sure that the person who reads the grant takes AI safety seriously, and much more seriously than other X-risks.

FWIW I fit that description in the sense that I think AI X-risk is higher probability than other X-risks. I imagine some / most others at LTFF would as well.

Speaking for myself: it depends a lot on whether the proposal or the person seems promising. I'd be excited about funding promising-seeming projects, but I also don't see a ton of low-hanging fruit when it comes to AI gov research.

This is a hard question to answer, in part because it depends a lot on the researcher. My wild guess for a 90% interval is $500k-$10m.

Yes, everyone apart from Caleb is part-time. My understanding is that LTFF is looking to make more full-time hires (most importantly a fund chair to replace Asya).

That's fair; upon re-reading your comment it's actually pretty obvious you meant the conditional probability, in which case I agree multiplying is fine.
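(A minimal sketch of why multiplying is fine in the conditional case, in my own notation rather than anything from the original thread: the chain rule decomposes the overall probability into factors that are each conditioned on the preceding steps, so no independence assumption is needed.)

P(1 \cap 2 \cap 3 \cap 4) = P(1)\, P(2 \mid 1)\, P(3 \mid 1, 2)\, P(4 \mid 1, 2, 3)

Multiplying only goes wrong if you plug in unconditional probabilities for steps that are not independent.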

I think the conditional statements are actually straightforward - e.g. once we've built something far more capable than humanity and that system "rebels" against us, it's pretty certain that we lose; and point (2) is the classic question of how hard alignment is. Your point (1), about whether we build far-superhuman AGI in the next 30 years or so, seems like the most uncertain one here.

Hi Geoffrey! Yeah, good point - I agree that the right way to look at this is finer-grained, separating out prospects for success via different routes (gov regulation, informal coordination, technical alignment, etc).

In general I quite like this post, I think it elucidates some disagreements quite well.

Thanks!

I’m not sure it represents the default-success argument on uncertainty well.

I haven't tried to make an object-level argument for either "AI risk is default-failure" or "AI risk is default-success" (sorry if that was unclear). See Nate's post for the former.

Re your argument for default-success: you would only need 97% certainty for each of steps 1-4 if the steps were independent, which they aren't.
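(To make the arithmetic explicit, with numbers of my own choosing; the ~90% overall target is my reading of where the 97% figure comes from, not something stated in the thread.)

0.9^{1/4} \approx 0.974, \qquad 0.974^4 \approx 0.9

So ~97% per step is what you'd need if the four steps were independent. Once they are dependent, the chain-rule factors P(2 \mid 1), P(3 \mid 1, 2), \ldots replace the unconditional ones, and some of them can sit close to 1, so the per-step bar is not uniformly 97%.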

I do agree that discussion is better pointed to discussing this evidence than gesturing to uncertainty.

Agreed.

Sure, but that's not a difference between the two approaches.

However, there are important downsides to the "cause-first" approach, such as a possible lock-in of main causes.

I'm surprised by this point - surely a core element of the 'cause-first' approach is cause prioritization & cause neutrality? How would that lead to a lock-in?

Thanks for the post, it was an interesting read!

Responding to one specific point: you compare

Community members delegate to high-quality research, think less for themselves but more people end up working in higher-impact causes

to

Community members think for themselves, which improves their ability to do more good, but they make more mistakes

I think there is actually just one correct solution here, namely thinking through everything yourself and trusting community consensus only insofar as you think it can be trusted (which is just thinking through things yourself on the meta-level).

This is the straightforwardly correct thing to do for your personal epistemics, and IMO it's also the move that maximizes overall impact. It would be kind of strange if the right move were for people not to form beliefs as best they can, or to act on other people's beliefs rather than their own?

(A sub-point here is that we haven't figured out all the right approaches yet so we need people to add to the epistemic commons.)
