riceissa

I am Issa Rice. https://issarice.com/

Comments

Long-Term Future Fund: Ask Us Anything!

In the April 2020 payout report, Oliver Habryka wrote:

I’ve also decided to reduce my time investment in the Long-Term Future Fund since I’ve become less excited about the value that the fund can provide at the margin (for a variety of reasons, which I also hope to have time to expand on at some point).

I'm curious to hear more about this (either from Oliver or any of the other fund managers).

Long-Term Future Fund: Ask Us Anything!

I am wondering how the fund managers are thinking, longer-term, about encouraging more independent researchers and projects to come into existence and stay in existence. As far as I can tell, there hasn't been much renewed granting to independent individuals and projects (i.e. granting a second or third time to grantees who have already received an LTFF grant). Do most grantees have a solid plan for securing funding after their LTFF grant money runs out, and if so, what do they tend to do?

I think LTFF is doing something valuable by giving people the freedom to not "sell out" to more traditional or mass-appeal funding sources (e.g. academia, established orgs, Patreon). I'm worried about a situation where receiving a grant from LTFF isn't enough to be sustainable, so that people go back to doing more "safe" things like working in academia or at an established org.

Any thoughts on this topic?

Tiny Probabilities of Vast Utilities: Concluding Arguments

Ok I see, thanks for the clarification! I hadn't noticed the phrase "the MIRI method", which does seem like an odd way to put it (if MIRI was in fact not involved in coming up with the model).

Tiny Probabilities of Vast Utilities: Concluding Arguments

MIRI and the Future of Humanity Institute each created models for calculating the probability that a new researcher joining MIRI will avert existential catastrophe. MIRI’s model puts it at between and , while the FHI estimates between and .

The wording here makes it seem like MIRI/FHI created the models, but the link in the footnote indicates that they were created by the Oxford Prioritisation Project. I looked at the project's blog post for the MIRI model, and it seems MIRI wasn't involved in creating it (although the post author appears to have sent it to MIRI before publishing). I wonder if I'm missing something, though, or misinterpreting what you wrote.

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

Did you end up writing this post? (I looked through your LW posts since the timestamp of the parent comment but it doesn't seem like you did.) If not, I would be interested in seeing some sort of outline or short list of points even if you don't have time to write the full post.

EA considerations regarding increasing political polarization

I think the forum software hides comments from new users by default. You can go here (and click the "play" button) to search for the most recently created users. Nathan Grant and ssalbdivad, for example, have comments on this post that are currently only visible via their user pages, not on the post itself.

Edit: The comments mentioned above are now visible on this post.

Existential Risk and Economic Growth

So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.

Can you say how you came up with the "moving from 1% to 0.8%" part? Everything else in your comment makes sense to me.

Existential Risk and Economic Growth

So you think the hazard rate might go from around 20% to around 1%?

I'm not attached to those specific numbers, but I think they are reasonable.

That's still far from zero, and with enough centuries with 1% risk we'd expect to go extinct.

Right, maybe I shouldn't have said "near zero". But I still think my basic point (of needing to lower the hazard rate if growth stops) stands.
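For what it's worth, here is the simple arithmetic behind "with enough centuries with 1% risk we'd expect to go extinct", assuming a constant 1% hazard per century (a simplification, since in the model the hazard rate changes over time):

```python
# Survival probability under a constant per-century hazard rate
# (a simplification; in the model the hazard rate varies over time).
hazard = 0.01  # 1% chance of existential catastrophe per century

for centuries in (10, 100, 500, 1000):
    survival = (1 - hazard) ** centuries
    print(f"{centuries:>4} centuries: {survival:.3g} probability of survival")
# ~0.90 after 10 centuries, ~0.37 after 100, ~0.0066 after 500, ~4e-5 after 1000
```

So even a 1% per-century rate only gives about a one-in-three chance of surviving 100 centuries, which is why I think the hazard rate would eventually need to be pushed much lower if growth stops.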

Existential Risk and Economic Growth

What's doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think that it's not but that if growth stopped we'd keep working on safety (say developing clean energy, improving relationships between US and China etc.) so that we'd eventually be safe?

I think the first option (low probability of x-risk with current technology) is driving my intuition.

Just to take some reasonable-seeming numbers (since I don't have numbers of my own): in The Precipice, Toby Ord estimates ~19% chance of existential catastrophe from anthropogenic risks within the next 100 years. If growth stopped now, I would take out unaligned AI and unforeseen/other (although "other" includes things like totalitarian regimes so maybe some of the probability mass should be kept), and would also reduce engineered pandemics (not sure by how much), which would bring the chance down to 0.3% to 4%. (Of course, this is a naive analysis since if growth stopped a bunch of other things would change, etc.)
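Here is a rough sketch of the kind of calculation I have in mind, using my reading of Ord's per-risk estimates from The Precipice, naively summed (ignoring correlations between risks), so the exact figures should be taken loosely:

```python
# Rough sketch using my reading of Toby Ord's estimates of the chance of
# existential catastrophe within the next 100 years (The Precipice),
# naively summed (ignoring correlations between risks).
risks = {
    "unaligned AI": 1 / 10,
    "engineered pandemics": 1 / 30,
    "unforeseen anthropogenic": 1 / 30,
    "other anthropogenic": 1 / 50,
    "nuclear war": 1 / 1000,
    "climate change": 1 / 1000,
    "other environmental damage": 1 / 1000,
}

print(f"total anthropogenic risk: {sum(risks.values()):.0%}")  # ~19%

# If growth stopped now: drop unaligned AI, unforeseen, and other anthropogenic,
# and scale engineered pandemics by anywhere from 0 (removed) to 1 (unchanged).
kept = risks["nuclear war"] + risks["climate change"] + risks["other environmental damage"]
low = kept                                    # pandemics fully removed: ~0.3%
high = kept + risks["engineered pandemics"]   # pandemics unchanged: ~3.6%
print(f"remaining risk: {low:.1%} to {high:.1%}")
```

That is roughly where the 0.3% to 4% range comes from, depending on how much of the engineered-pandemic risk is kept.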

My intuitions depend a lot on when growth stopped. If growth stopped now I would be less worried, but if it stopped after some dangerous-but-not-growth-promoting technology was invented, I would be more worried.

but what about eg. climate change, nuclear war, biorisk, narrow AI systems being used in really bad ways?

I'm curious what kind of story you have in mind for current narrow AI systems leading to existential catastrophe.
