
frib

76 karma · Joined Apr 2019

Comments (12)

Aligned AIs built in 100 years: 50% of the value

What drives this huge drop? A naive utility calculation would put it very close to 100%. (Do you mean "aligned AIs built in 100 years, if humanity still exists by that point", which would include extinction risk before 2123?)

Agreed, I'll edit the post.

This roughly lines up with what I had in mind!

In a world in which people used ITN as a way to do Fermi estimates of impact, I would have written "ITN isn't the only way to do Fermi estimates of impact", but my experience is that people don't use it this way. I have almost never seen an ITN analysis with a conclusion that looks like "therefore, the marginal impact is roughly X lives per dollar" (which is what I care about). But I agree that "Fermi estimates vs ITN" isn't a good title either: what I argue for is closer to "Fermi estimates (including ITN_as_a_way_to_Fermi_estimate, which is sometimes pretty useful) vs ITN_people_do_in_practice".

That's an ordering!

It's mostly analyses like the ones from 80,000 Hours, which don't multiply the three together, that might make you think there is no ordering.

Is there a way I can make that more precise?

How would you compare these two interventions:

1: I=10, T=1, N=1

2: I=1, T=2, N=2

I feel like the best way to do that is to multiply things together.

And if you have error bars around I, T, and N, then you can probably do something more precise, but still close in spirit to "multiply the three things together".
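For concreteness: multiplying point estimates gives Q1 = 10 × 1 × 1 = 10 and Q2 = 1 × 2 × 2 = 4, so intervention 1 comes out ahead. And here is a minimal sketch of the "error bars" version; the lognormal noise model and its width are my assumptions, not anything from the original post:

```python
import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES = 100_000

def priority_samples(i, t, n, sigma=0.5):
    """Sample Q = I * T * N with lognormal error bars of width
    `sigma` (in log-space) around each point estimate."""
    noise = rng.lognormal(mean=0.0, sigma=sigma, size=(3, N_SAMPLES))
    return i * noise[0] * t * noise[1] * n * noise[2]

q1 = priority_samples(10, 1, 1)  # intervention 1
q2 = priority_samples(1, 2, 2)   # intervention 2

print(f"median Q1 = {np.median(q1):.1f}, median Q2 = {np.median(q2):.1f}")
print(f"P(Q1 > Q2) = {(q1 > q2).mean():.2f}")
```

The aggregation is still "multiply the three things together"; the error bars just tell you how confident to be in the resulting ordering.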

I don't understand how the robustness argument works; I couldn't steelman it.

If you want to assess an intervention by breaking its priority Q down into I, T & N:

  • if you multiply them together, you haven't made your estimate any more robust than with any other breakdown (see the unit check after this list);
  • if you don't, then you can't say anything about the overall priority of the intervention.
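For reference, here is the unit check behind "multiply them together", using the standard 80,000 Hours definitions of the three factors (my paraphrase of their framework, not something stated in this thread):

$$
Q = \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{I} \times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{T} \times \underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{N} = \frac{\text{good done}}{\text{extra dollar}}
$$

The intermediate units cancel no matter how you factor Q, which is why multiplying is no more (and no less) robust under ITN than under any other breakdown.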

What's your strategy for getting highly robust estimates of numerical quantities? How do you ground it? (And why would it work only with the ITN breakdown of Q, and not with any other breakdown?)

I talked to people who think defaults should be higher. I really don't know where they should be.

I put "fraction of the work your org. is doing" at 5% because I was thinking about a medium-sized AGI safety organization (there are around 10 of them, so 10% each seems sensible), and because I expect that there will be many more in the future, I put 5%.

I put "how much are you speeding up your org." at 1%, because there are around 10 people doing core research in each org., but you are only slightly better than the second-best candidate who would have taken the job, so 1% seemed reasonable. I don't expect this percentage to go down, because as the organization scale up, senior members become more important. Having "better" senior researchers, even if there are hundreds of junior researchers, would probably speed up progress quite a lot.

Where do you think the defaults should be, and why?

I made the text a bit clearer. As for the bug, it didn't affect the end result of the Fermi estimate, but the way I computed the intermediate "probability of doom" was wrong: I forgot to take into account situations where AGI safety ends up being impossible... It is fixed now.
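For what it's worth, the fix described reads like a law-of-total-probability correction. A minimal sketch with made-up numbers; the structure is my guess at the model, which isn't shown here:

```python
# Placeholder probabilities, not values from the post.
p_safety_impossible = 0.1  # alignment turns out to be unsolvable
p_fail_if_possible = 0.3   # solvable, but not solved in time

# Buggy version: conditions only on worlds where safety is possible.
p_doom_buggy = p_fail_if_possible

# Fixed version: also counts worlds where safety is impossible.
p_doom_fixed = (p_safety_impossible * 1.0
                + (1 - p_safety_impossible) * p_fail_if_possible)
print(f"{p_doom_buggy:.2f} {p_doom_fixed:.2f}")  # 0.30 0.37
```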

Thank you for the feedback!
