WilliamKiely

You can send me a message anonymously here: https://www.admonymous.co/will


Comments

Don’t wait – there’s plenty more need and opportunity today

I think our disagreement may just be semantic, though I also have an intuition that something is problematic with your framing (though it's also hard for me to put my finger on what exactly I don't like about it).

From the link in my previous comment, GiveWell writes: "Our expectation is that we’ll only be rolling over [part of] Open Philanthropy's donation, and we will direct other donor funds on the same schedule we have followed in the past."

I chose to accept GiveWell's framing of things (i.e. that your donation will not be rolled over), but your framing (in which your donation is rolled over) may be equally valid as long as you simultaneously claim that GiveWell will roll over a smaller portion of Open Phil's donation than GiveWell says it will (smaller by the amount of your donation).

Then again, your framing has the issue that if every individual donor who was still considering donating made your claim that GiveWell would roll over their donation, that claim would have been false, since the sum of individuals' donations was expected to be more than the amount GiveWell intended to roll over. Maybe this wasn't actually an issue, though, given that it was highly unlikely that GiveWell's communications about rollovers would have caused individuals to donate $110M (the amount GiveWell expected to roll over) less than GiveWell originally forecasted they would.

The Unweaving of a Beautiful Thing

The Nonlinear Library also has a reading of this story. I listened to their version (Spotify link) and really liked it.

I also listened to a few minutes of your version afterwards to compare, and my conclusion is that I prefer the Nonlinear Library's version by a significant margin. The quality of the text-to-speech software used by the Nonlinear Library is the best I've listened to.

And to be clear, I'm mentioning my preference for their version over your reading here just in case there is anyone else out there who hasn't tried the Nonlinear Library's readings yet due to having a cached belief that TTS is still really bad, but who would actually enjoy the quality of their readings enough to want to listen to their content more in the future.

Comments for shorter Cold Takes pieces

Yes, you're quite right, thanks. I failed to differentiate between my independent impression and my all-things-considered view when thinking about and writing the above. Thinking about it now, I realize ~5% is basically my independent impression, not my all-things-considered view. My all-things-considered view is more like ~20% that Zvi wins--and if you told me yours was 40%, I'd update to ~35%, though I'd guess yours is more like ~25%. I meta-updated upwards based on knowing Zvi's view and the fact that Holden updated upwards on Zvi to 50%. (And even if I didn't know their views, my initial naive all-things-considered forecasts would very rarely be as far from 50% as 5% is, unless there's a clear base rate that extreme.)
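(As an aside, one simple way to formalize this kind of meta-update is to average forecasts in log-odds space. This is just an illustrative toy model, not how I actually produced the numbers above, but it happens to land near my ~20%:)

```python
import math

def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def pool(*probs: float) -> float:
    """Equal-weight average of the given forecasts in log-odds space."""
    return inv_logit(sum(logit(p) for p in probs) / len(probs))

# My ~5% independent impression pooled with Zvi/Holden's implied ~50%:
print(round(pool(0.05, 0.50), 2))  # ~0.19, i.e. close to the ~20% above
```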

That said, I haven't read much of what Zvi has written in general, and the one thing I do remember reading of his on Covid (his 12/24/20 Covid post) I strongly disagreed with at the time (and it turns out he was indeed overconfident). I recognize that this probably makes me biased against Zvi's judgment, leading me to want to meta-update on his view less than I probably should (since I hear a lot of people think he has good judgment, and there probably are a lot of other good predictions he's made that I'm just not aware of). But at the same time I really don't personally have good evidence of his forecasting track record in the way that I do of e.g. your record, so I'm much less inclined to meta-update a lot on him than I would on you, for example.

Additionally, I did think of a plausible error theory earlier, after writing the 5% forecast (specifically: a plausible story for how Zvi could have accepted such a bet at terrible odds). (I said this out loud to someone at the time rather than typing it:) My thought was that Zvi's view in the conceptual disagreement they were betting on seems much more plausible to me than Zvi's position in the bet operationalization. That is, there are many scenarios that would make it look like Zvi was basically right but that might technically cash out as a Holden win according to the exact betting terms described here. For example, there might be a huge Omicron wave--the largest Covid wave yet--and cases might drop quickly afterwards and it might be the last wave of the pandemic, and yet despite all of that, perhaps only 50% of the cases after January 1, 2022 happen before the end of February, rather than the 75% necessary for Zvi to win.

Zvi thinks there’s a 70% chance of the following: “Omicron will blow through the US by 3/1/2022, leading to herd immunity and something like the ‘end’ of the COVID-19 pandemic.”

Holden proposed a bet, and apparently they went back and forth over a few emails on the bet operationalization before agreeing. My hypothesis, then, is that Zvi was anchored on his 70% confidence from this statement and didn't take the time to properly re-evaluate his forecast for the specific bet operationalization they ultimately agreed to. I can easily see him spending only a small amount of time thinking about the bet operationalization and agreeing to it without realizing that it's very different from the concept he was originally assigning 70% to, due to wanting to agree to a bet on principle and not wanting to go back and forth a few more times by email.

Of course this is just a story, and perhaps he did consider the bet operationalization carefully. But I think even smart people with good judgment can easily make mistakes like this. If Zvi read my comment and responded, "Your hypothesis is wrong; I thought carefully about the bet operationalization and I'm confident I made a good bet," I'd meta-update on him a lot more--maybe up to 50% like Holden did. But I don't know enough about Zvi to know whether the mere fact that he agreed to a public bet with Holden is strong evidence that he thought about the exact terms carefully and wasn't biased by his strong belief in his original 70% statement.

(Noting to myself publicly that I want to learn to be more concise with my thoughts. This comment was too long.)

Comments for shorter Cold Takes pieces

Bet with Zvi about Omicron:

Fun exercise: which side of this bet would you want to be on?

I'm definitely on Holden's side of the bet.

In summary, I assign 80% to Holden's outcome, 15% to the ambiguous "push" outcome, and 5% to Zvi's outcome.

This is a low-information forecast, but there seem to be three outcomes to the bet, and Zvi's outcome clearly seems to be the least likely:

(1) For Zvi to win, Covid cases (of all variants, including any future ones) need to average ~18 times higher over the first two months of 2022 than over the following 12 months (math: 18 = (0.75/2) / (0.25/12)).

18 is a very high ratio given Covid's track record so far. The ratio of the 7-day average of US cases at its high (1/11/2021: ~256,000/day) to its low (6/21/2021: ~12,000/day) is ~21, barely higher than ~18. Plus, those two dates were about five months apart, giving cases plenty of time to drop off.
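(For concreteness, here's the arithmetic behind those two ratios, using the approximate case counts above:)

```python
# Ratio of average daily cases (Jan-Feb 2022 vs. the following 12 months)
# implied by Zvi's side of the bet, given the 75% threshold:
zvi_share = 0.75
implied_ratio = (zvi_share / 2) / ((1 - zvi_share) / 12)
print(implied_ratio)  # 18.0

# Historical comparison: 7-day-average US cases at the Jan 2021 peak vs. the
# June 2021 trough (approximate figures from above):
peak, trough = 256_000, 12_000
print(round(peak / trough, 1))  # ~21.3
```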

I don't see a very plausible way to get that kind of ratio for the first two months of 2022 over the 12 months afterwards. For example, it seems unlikely that cases would drop off quickly enough at the end of February to avoid adding a large number of cases in March (and, to a lesser degree, April, etc.). Even if cases virtually disappeared later in 2022 (such that the second period being 12 months instead of 6 doesn't matter much), it's really hard for cases to drop off so quickly that the number of cases from March 1 onwards doesn't end up being at least a third of the number of cases from January and February. The prior on that steep a drop-off happening by the end of February is quite low, and the fact that it's only a little over a week until January and cases are still on the rise doesn't make a steep drop-off before March seem any more likely. There's just no way Zvi could know that that is likely to happen. I don't need to read his post to know that he doesn't know that.

Given this simple consideration that cases would have to drop off exceptionally fast at just the right time for Zvi's outcome to happen, I assign a 5% chance to Zvi's outcome happening.

It was really just my strong disagreement with the idea that Zvi's outcome is likely that made me want to write this comment, but I'll go ahead and give estimates for the other two outcomes as well:

(2) For Holden to win, cases in the 12-month period after February 2022 have to exceed one-third of cases in the first two months of 2022 before a new variant comes along and takes over. I haven't heard of any new variant after Omicron. Such a new variant would have to be much more contagious than Omicron for it to have a chance to take over everywhere in the US in time to stop Omicron's cases from exceeding that one-third threshold in March-May. I hear Omicron is super contagious so that seems unlikely. Therefore, my low-information forecast is that Holden's outcome is 80% likely.

(3) For the bet to be a "push" (i.e. for the bet to resolve ambiguously), a new variant or variants need to take over before Omicron-and-previous-variant cases after March 1 exceed the one-third threshold mentioned above. This seems more likely than Zvi's outcome, but still not that likely. I'll assign 15% to the ambiguous outcome.
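(To sanity-check these numbers: they sum to 100%, and conditional on the bet actually resolving rather than pushing, they imply Holden wins roughly 94% of the time:)

```python
p_holden, p_push, p_zvi = 0.80, 0.15, 0.05
assert abs(p_holden + p_push + p_zvi - 1.0) < 1e-9

p_resolve = p_holden + p_zvi  # probability the bet doesn't push
print(round(p_holden / p_resolve, 3))  # ~0.941: Holden's win chance given resolution
print(round(p_zvi / p_resolve, 3))     # ~0.059: Zvi's win chance given resolution
```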

[Linkpost] Don't Look Up - a Netflix comedy about asteroid risk and realistic societal reactions (Dec. 24th)

The premise of the film Seeking a Friend for the End of the World (2012) is that:

a mission to stop an incoming 70-mile wide asteroid known as "Matilda" has failed and that the asteroid will make impact in three weeks, destroying all life on Earth

This is taken as inevitable and accepted by the characters in the film. The film ends with the Earth getting destroyed, implying human extinction.

Exposure to 3m Pointless viewers- what to promote?

Brainstorming some concrete wording for a personal intro:

"Hi, I'm Patrick. I'm currently pursuing a master's degree in artificial intelligence. A startling fact that keeps me up at night is that humanity spends more on ice cream every year than on ensuring that the technologies we develop do not destroy us. I intend to help change this with my career and with the Giving What We Can pledge I've taken to give 10% of my income to whichever organisations can most effectively use it to improve the lives of others, now and in the years to come."

(The hyperlinked text is taken verbatim from the sources linked.)

The thought behind this wording is that it ensures you actually introduce yourself, while also subtly fitting in two plugs--the first being that if a curious viewer tries fact-checking the ice cream factoid by Googling something like "humanity spends more on ice cream than technology not destroying us", the top search results will all be about Toby Ord's The Precipice, and the second obviously being GWWC / effective giving.

WilliamKiely's Shortform

Every.org now has "Givelists"--I just had this one created: https://giveli.st/xrisk

Existential Risk

This is a list of three nonprofits that do work intended to help reduce existential risk from unaligned AI.

Donation split evenly between 3 charities:

Machine Intelligence Research Institute

Berkeley Existential Risk Initiative

The Center for Human-Compatible AI (CHAI)

A case for the effectiveness of protest

I also glanced briefly at your CEA spreadsheet and saw some "pessimistic" impact numbers which were positive. If I'm interpreting you correctly, that "pessimistic" scenario is supposed to represent a 5th percentile outcome. If that's right, I'd note that my naive impression is that >95% confidence that XR helps reduce carbon is overconfident. Of course you're much better informed than I am, but it still seems to me like that level of confidence would be hard to justify.
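(To spell out why I read it that way: if the "pessimistic" column really is a 5th percentile and it's still positive, then the implied probability that the true impact is positive is above 95%. A toy illustration with made-up numbers, assuming a normal impact distribution rather than whatever the spreadsheet actually uses:)

```python
from scipy.stats import norm

# Hypothetical impact distribution (made-up units and parameters, purely
# illustrative -- not taken from the actual spreadsheet):
mean, sd = 10.0, 5.0

p5 = norm.ppf(0.05, loc=mean, scale=sd)
print(round(p5, 2))  # ~1.78: a positive "pessimistic" (5th percentile) value

p_positive = norm.sf(0, loc=mean, scale=sd)
print(round(p_positive, 3))  # ~0.977: implied confidence the impact is positive
```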
