JimmyJ

158 karma · Joined Jun 2019

Comments (16)

I am writing a post on the effects of this one. If anyone is interested, I will try to finish

I'm interested.

Metaculus currently gives a 16% chance to the claim that total deaths before 2021 will be greater than 11.6 M.

Could you please provide the JHU questions and predictions for those of us who don't want to sign up?

I suggest the question you've linked has an artificially low upper bound

The question has an upper bound of 100 million deaths, not cases. I don't think that is "artificially low".

Maybe you are confusing Hurford's link with this old question, which does have an artificially low upper bound and deals with cases instead of deaths.

All metaculus questions are about cases, not deaths.

Most of them are, but the one Hurford linked to is explicitly about the number of deaths: "How many people will die as a result of the 2019 novel coronavirus (2019-nCoV) before 2021?".

I am not sure where you found the claim you cite

If you look at the bottom of the page, it says that the community predicts a ~3% chance of greater than 100 million deaths. Previously, it said 2% for the same number of deaths.

Just to be absolutely clear about what I am referring to, here is a screenshot of the relevant part of the UI.

Note: despite it being kind of neat (in my humble opinion) to develop such a scoring system, and getting mixed-to-positive feedback about it, I don't seem to have gotten attention from EA or EA-adjacent media, journalists, podcasts, etc.

Have you tried reaching out to anyone?

The opposite trend occurred for SARS (a virus in the same class as nCoV-2019), which initially had a deaths/cases rate of around 2-5% but ended up above 10% once all cases had run their full course.
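For illustration, here is a minimal sketch of why the naive deaths/cases ratio tends to understate the final fatality rate while an outbreak is ongoing; the numbers are made up for the example, not actual SARS figures.

```python
# Hypothetical mid-outbreak numbers (illustrative only, not real SARS data).
deaths = 50
recovered = 400
still_sick = 550
cases = deaths + recovered + still_sick  # 1,000 confirmed cases so far

# Naive estimate divides deaths by all confirmed cases, including unresolved ones.
naive_cfr = deaths / cases                    # 5.0%

# Restricting to resolved cases (died or recovered) gives a higher estimate,
# closer to what the final rate looks like once every case has run its course.
resolved_cfr = deaths / (deaths + recovered)  # ~11.1%

print(f"naive deaths/cases: {naive_cfr:.1%}")
print(f"deaths/resolved cases: {resolved_cfr:.1%}")
```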

In a comment from October 2019, Ben Pace stated that there was currently no actionable policy advice the AI safety community could give to the President of the United States. I'm wondering to what extent you agree with this.

If the US President or an influential member of Congress were willing to talk one-on-one with you for a couple of hours about AI safety policy, what advice would you give them?

I skimmed the post, but I couldn't find what this is responding to. Could you provide a link for context?

Answer by JimmyJ · Dec 31, 2019

The founders of PETRL include Daniel Filan, Buck Shlegeris, Jan Leike, and Mayank Daswani, all of whom were students of Marcus Hutter. Brian Tomasik coined the name.

Of these five people, four are busy doing AI safety-related research. (Filan is a PhD student involved with CHAI, Shlegeris works for MIRI, Leike works for DeepMind, and Tomasik works for FRI. OTOH, Daswani works for a cybersecurity company in Australia.)

So, my guess is that they became too busy to work on PETRL and lost interest. It's kind of a shame, because PETRL was (to my knowledge) the only organization focused on the ethics of AI qua moral patient. However, it seems pretty plausible to me that the AI safety work the PETRL founders are doing now is more effective.

In July 2017, I emailed PETRL asking them if they were still active:

Dear PETRL team,
Is PETRL still active? The last blog post on your site is from December 2015, and there is no indication of ongoing research or academic outreach projects. Have you considered continuing your interview series? I'm sure you could find interesting people to talk to.

The response I received was:

Thanks for reaching out. We're less active than we'd like to be, but have an interview in the works. We hope to have it out in the next few weeks!

That interview was never published.
