riceissa

I am Issa Rice. https://issarice.com/

Comments

Tiny Probabilities of Vast Utilities: Concluding Arguments

Ok I see, thanks for the clarification! I didn't notice the use of the phrase "the MIRI method", which does sound like an odd way to phrase it (if MIRI was in fact not involved in coming up with the model).

Tiny Probabilities of Vast Utilities: Concluding Arguments

MIRI and the Future of Humanity Institute each created models for calculating the probability that a new researcher joining MIRI will avert existential catastrophe. MIRI’s model puts it at between and , while the FHI estimates between and .

The wording here makes it seem like MIRI and FHI created these models, but the link in the footnote indicates that they were created by the Oxford Prioritisation Project. I looked at their blog post for the MIRI model, and it seems MIRI wasn't involved in creating it (although the post author appears to have sent it to MIRI before publishing the post). I wonder if I'm missing something, though, or am misinterpreting what you wrote.

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

Did you end up writing this post? (I looked through your LW posts since the timestamp of the parent comment but it doesn't seem like you did.) If not, I would be interested in seeing some sort of outline or short list of points even if you don't have time to write the full post.

EA considerations regarding increasing political polarization

I think the forum software hides comments from new users by default. You can go here (and click the "play" button) to search for the most recently created users. Nathan Grant and ssalbdivad have comments on this post that are so far only visible via their user pages, not on the post itself.

Edit: The comments mentioned above are now visible on this post.

Existential Risk and Economic Growth

So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.

Can you say how you came up with the "moving from 1% to 0.8%" part? Everything else in your comment makes sense to me.

Existential Risk and Economic Growth

So you think the hazard rate might go from around 20% to around 1%?

I'm not attached to those specific numbers, but I think they are reasonable.

That's still far from zero, and with enough centuries with 1% risk we'd expect to go extinct.

Right, maybe I shouldn't have said "near zero". But I still think my basic point (of needing to lower the hazard rate if growth stops) stands.
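To make the "enough centuries" point concrete, here is a minimal sketch of cumulative survival under a constant per-century hazard rate, using the 1% and 0.8% figures mentioned in this thread purely for illustration (they aren't anyone's considered estimates):

```python
# Illustrative only: cumulative survival probability under a constant
# per-century hazard rate. With any fixed nonzero rate, survival decays
# toward zero, which is why the hazard rate eventually has to come down
# if growth (and safety progress) stops.
for hazard in (0.01, 0.008):              # 1% vs 0.8% risk per century
    for centuries in (10, 100, 1000):
        survival = (1 - hazard) ** centuries
        print(f"hazard {hazard:.1%}/century, {centuries:>4} centuries: "
              f"P(survive) ≈ {survival:.3g}")
```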

Existential Risk and Economic Growth

What's doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think that it's not but that if growth stopped we'd keep working on safety (say developing clean energy, improving relationships between US and China etc.) so that we'd eventually be safe?

I think the first option (low probability of x-risk with current technology) is driving my intuition.

Just to take some reasonable-seeming numbers (since I don't have numbers of my own): in The Precipice, Toby Ord estimates ~19% chance of existential catastrophe from anthropogenic risks within the next 100 years. If growth stopped now, I would take out unaligned AI and unforeseen/other (although "other" includes things like totalitarian regimes so maybe some of the probability mass should be kept), and would also reduce engineered pandemics (not sure by how much), which would bring the chance down to 0.3% to 4%. (Of course, this is a naive analysis since if growth stopped a bunch of other things would change, etc.)
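In case it helps, here is the back-of-envelope version of that calculation. The per-risk figures are my reading of Ord's Table 6.1, and I'm treating them as roughly additive (which Ord himself doesn't), so this is only a sketch of where the 0.3% to 4% range comes from:

```python
# Rough sketch, not Ord's own aggregation: per-risk figures from my reading
# of The Precipice (Table 6.1), treated as approximately additive.
risks = {
    "unaligned AI": 1 / 10,
    "engineered pandemics": 1 / 30,
    "unforeseen anthropogenic": 1 / 30,
    "other anthropogenic": 1 / 50,
    "nuclear war": 1 / 1000,
    "climate change": 1 / 1000,
    "other environmental damage": 1 / 1000,
}
total = sum(risks.values())  # ~0.19, i.e. the ~19% mentioned above

# Remove unaligned AI and unforeseen/other if growth stopped now.
removed = (risks["unaligned AI"]
           + risks["unforeseen anthropogenic"]
           + risks["other anthropogenic"])
remaining_high = total - removed                              # ~3.6%: pandemics kept
remaining_low = remaining_high - risks["engineered pandemics"]  # ~0.3%: pandemics removed too

print(f"total anthropogenic ≈ {total:.1%}")
print(f"range if growth stopped ≈ {remaining_low:.1%} to {remaining_high:.1%}")
```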

My intuitions depend a lot on when growth stopped. If growth stopped now I would be less worried, but if it stopped after some dangerous-but-not-growth-promoting technology was invented, I would be more worried.

but what about eg. climate change, nuclear war, biorisk, narrow AI systems being used in really bad ways?

I'm curious what kind of story you have in mind for current narrow AI systems leading to existential catastrophe.

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

The timing of this AMA is pretty awkward, since many people will presumably not have access to the book yet or will not have finished reading it. For comparison, Stuart Russell's new book was published in October and the AMA was in December, which seems like a much more comfortable gap for people to process the book. Personally, I will probably have a lot of questions once I read the book, and I also don't want to waste Toby's time by asking questions that it will answer. Is there any way to delay the AMA or hold a second one at a later date?

What are the key ongoing debates in EA?

I don't think you can add the percentages for "top or near top priority" and "at least significant resources". If you look at the row for global poverty, the percentages add up to over 100% (61.7% + 87.0% = 148.7%), which means the table is double-counting some people.

Looking at the bar graph above the table, it looks like "at least significant resources" includes everyone in "significant resources", "near-top priority", and "top priority". For mental health it looks like "significant resources" has 37%, and "near-top priority" and "top priority" combined have 21.5% (shown as 22% in the bar graph).

So your actual calculation would just be 0.585 × 0.25, which is about 15%.
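To spell out the arithmetic (the 0.25 is simply carried over from the calculation being corrected; the other figures are read off the bar graph):

```python
# The "at least significant resources" column already includes the
# "near-top priority" and "top priority" respondents, so summing the two
# columns double-counts people: e.g. 61.7% + 87.0% = 148.7% for global poverty.
mental_health = 0.37 + 0.215   # significant + (near-top and top priority) = 0.585
other_factor = 0.25            # carried over from the original calculation
print(f"{mental_health * other_factor:.1%}")   # ≈ 14.6%, i.e. about 15%
```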
