WilliamKiely's Comments

New article from Oren Etzioni

It feels like Etzioni is misunderstanding Bostrom in this article, but I'm not sure. His point about Pascal's Wager confuses me:

Some theorists, like Bostrom, argue that we must nonetheless plan for very low-probability but high-consequence events as though they were inevitable

Etzioni seems to be saying that Bostrom argues that we must prepare for short AI timelines even though developing HLMI on a short timeline is (in Etzioni's view) a very low-probability event?

I don't know whether Bostrom thinks this or not. But isn't Bostrom's main point that even if AI systems powerful enough to cause an existential catastrophe are at least a few decades away (or even a century or longer), we should still think now about what we can do to prepare for their eventual development, given that there are good reasons to think they may cause an x-catastrophe when they are eventually developed and deployed?

Etzioni doesn't seem to address this, except to imply that he disagrees with the view: he says it's unreasonable to worry about AI risk now, and that we will (definitely?) have time to adequately address any existential risk that future AI systems may pose if we wait to start addressing those risks until after the canaries start collapsing.

New article from Oren Etzioni

Etzioni's implicit argument against AI posing a nontrivial existential risk seems to be the following:

(a) The probability of human-level AI being developed on a short timeline (less than a couple decades) is trivial.

(b) Before human-level AI is developed, there will be 'canaries collapsing' warning us that human-level AI is potentially coming soon or at least is no longer a "very low probability" on the timescale of a couple decades.

(c) "If and when a canary “collapses,” we will have ample time before the emergence of human-level AI to design robust “off-switches” and to identify red lines we don’t want AI to cross"

(d) Therefore, AI does not pose a nontrivial existential risk.

It seems to me that if there is a nontrivial probability that he is wrong about (c), then it is in fact meaningful to say that AI poses a nontrivial existential risk that we should start preparing for before the canaries he mentions start collapsing.

New article from Oren Etzioni

Etzioni also appears to agree that once canaries start collapsing it is reasonable to worry about AI threatening the existence of all of humanity.

As Andrew Ng, one of the world’s most prominent AI experts, has said, “Worrying about AI turning evil is a little bit like worrying about overpopulation on Mars.” Until the canaries start dying, he is entirely correct.

Concerning the Recent 2019-Novel Coronavirus Outbreak

On January 30th, I accepted a bet with a friend on the above terms. Nobody else offered to bet me. Since then, I have updated my view: I now give a ~60% probability that there will be over 10,000 deaths. https://predictionbook.com/predictions/198256

My update is mostly based on (a) Metaculus's median estimate of the number of deaths moving from ~3.5k to now slightly over ~10k (https://www.metaculus.com/questions/3530/how-many-people-will-die-as-a-result-of-the-2019-novel-coronavirus-2019-ncov-before-2021/) and (b) some naive extrapolation of the possible total number of deaths from the Feb 4th death data here: https://www.worldometers.info/coronavirus/coronavirus-death-toll/
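The "naive extrapolation" amounts to a constant-growth-rate projection. A minimal sketch, using hypothetical placeholder numbers rather than the actual Feb 4th figures:

```python
# Naive exponential extrapolation of a cumulative death toll.
# The inputs below are hypothetical placeholders for illustration,
# not the actual Feb 4th data.

def extrapolate(current_total, daily_growth_rate, days):
    """Project a cumulative total forward assuming a constant daily
    growth rate. Deliberately naive: ignores containment measures
    and epidemic saturation, so it overstates long-run totals."""
    return current_total * (1 + daily_growth_rate) ** days

# e.g. a hypothetical 400 deaths growing 15%/day for 30 more days
projected = extrapolate(400, 0.15, 30)
print(round(projected))  # well over 10,000 under these assumptions
```

Under almost any sustained double-digit daily growth rate, such a projection crosses 10,000 within weeks, which is why even crude extrapolation can move the estimate.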

Concerning the Recent 2019-Novel Coronavirus Outbreak

I'm willing to bet up to $100 at even odds that by the end of 2020, the confirmed death toll from the Wuhan Coronavirus (2019-nCoV) will not be over 10,000. Is anyone willing to take the bet?

EA Giving Tuesday, Dec 3, 2019: Instructions for Donors

We updated it again with different language, hopefully incorporating the spirit of your feedback. By mentioning "10-20 minutes" we didn't want to discourage people who were willing to put in more time (say, an hour or more) from doing so; many donors would benefit from spending much more time preparing and practicing.

EA Giving Tuesday, Dec 3, 2019: Instructions for Donors

Thanks, done. Updated it to:

  • Speed. Per our instructions, fill out your donation early so you can calmly finalize your donation with one click by clicking the green "Donate" button within the first second of the start of the match.
    • In 2017, the match lasted 86 seconds; in 2018, it lasted 15 seconds; this year we expect it to run out much faster, plausibly in one second (personal median estimate: ~4 seconds).

EA Giving Tuesday, Dec 3, 2019: Instructions for Donors

Thanks Brian, I updated the 'US, $500 or more' instructions page with a note that "Someone who follows these instructions, which should take only 10-20 minutes of pre-work, should be able to donate within 1-3 seconds."

EA Giving Tuesday, Dec 3, 2019: Instructions for Donors

Thanks for the helpful feedback, Mike! I just updated the website to improve the language based on your recommendation. Here's what I put:

EDIT/UPDATE 12/2/2019:

Facebook eliminated the "Confirm Your Donation" prompt this morning, so we made the following change:

New version:

Last year the matching funds ran out in 15 seconds. We expect it to run out much faster this year. We recommend clicking the green "Donate" button within the first second after the match start time of December 3rd, 2019, at 08:00:00am EST (05:00:00am PST).

Old version:

Last year the matching funds ran out in 15 seconds. We expect it to run out much faster this year. We recommend starting the donation process early and clicking the green "Donate" button on your $500+ donation prior to the official match start, that way you can finalize your donation with one click by clicking the final gray "Donate" button within the first second after the match start time of December 3rd, 2019, at 08:00:00am EST (05:00:00am PST).

The Frontpage/Community distinction

it's not uncommon for a post to be accidentally published before it is finished

I suggest adding an "Are you sure you want to publish?" confirmation prompt when users click "Submit" on a draft post, to address this.
