Jack R

Stanford CS Master's Student




Predict responses to the "existential risk from AI" survey

Thanks for this, Rob. I was going to post this myself, but you beat me to it :)

Also, wow: I was systematically wrong. I think my (relative) x-risk optimism majorly skewed my predictions.

Predict responses to the "existential risk from AI" survey

SPOILER: My predictions for the mean answers from each org. The first number is for Q2, the second is for Q1 (EDIT: I originally had the order of the questions wrong):

OpenAI: 15%, 11%
FHI: 11%, 7%
DeepMind: 8%, 6%
CHAI/Berkeley: 18%, 15%
MIRI: 60%, 50%
Open Philanthropy: 8%, 6%

My attempt to think about AI timelines

I think it’s hard to automate things

Can you elaborate on why you think this?

Shapley values: Better than counterfactuals

Instead of "counterfactually" should we say "Shapily" now?
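For readers unfamiliar with the distinction the linked post draws, a minimal sketch may help. In a toy game where a project is worth 100 only if both players participate, each player's *counterfactual* impact is 100 (remove either one and the whole value vanishes), so counterfactuals sum to 200. Shapley values instead split the credit so contributions sum to the total. The value function `v` and the brute-force permutation averaging below are illustrative assumptions, not anything from the post:

```python
from itertools import permutations

def shapley_values(players, v):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            totals[p] += v(frozenset(coalition)) - before
    return {p: t / len(perms) for p, t in totals.items()}

# Toy game: worth 100 only if both A and B participate.
v = lambda s: 100.0 if s == frozenset({"A", "B"}) else 0.0

# Counterfactual impact of each player is 100 (sums to 200);
# Shapley values split the credit: 50 each, summing to 100.
print(shapley_values(["A", "B"], v))  # {'A': 50.0, 'B': 50.0}
```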