Bio


I am a generalist quantitative researcher.

How others can help me

I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How I can help others

I can help with career advice, prioritisation, and quantitative analyses.

Comments
2973

Topic contributions
40

Got it. Thanks. Here is what @Ryan Greenblatt says in the piece you linked about fatalities from AI takeover.

My guess is that, conditional on AI takeover, around 50% of currently living people die[3] in expectation and literal human extinction[4] is around 25% likely.[5]

Ryan, could you clarify the timeline for the 50 % of humans dying in expectation conditional on takeover?

The 25 % chance of human extinction refers to the 300 years following takeover, and excludes voluntary informed extinction, an exclusion I like because such an extinction would not be obviously bad. Here is footnote 4.

Concretely, literally every human in this universe is dead (under the definition of dead included in the prior footnote, so consensual uploads don't count). And, this happens within 300 years of AI takeover and is caused by AI takeover. I'll put aside outcomes where the AI later ends up simulating causally separate humans or otherwise instantiating humans (or human-like beings) which aren't really downstream of currently living humans. I won't consider it extinction if humans decide (in an informed way) to cease while they could have persisted or decide to modify themselves into very inhuman beings (again with informed consent etc.).
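
As an illustrative decomposition (my own back-of-the-envelope sketch, not something Ryan states), the two guesses above jointly imply the expected fraction of currently living people dying conditional on takeover without extinction, if one assumes extinction corresponds to 100 % of currently living people dying:

```python
# Back-of-the-envelope decomposition of the two guesses above. Assumption
# (mine, not Ryan's): "extinction" corresponds to 100 % of currently living
# people dying, so the expectation decomposes over the two cases.
p_ext = 0.25   # P(extinction | takeover), Ryan's guess
e_die = 0.50   # E[fraction of currently living people dying | takeover]

# e_die = p_ext * 1 + (1 - p_ext) * x, where x is the expected fraction
# dying conditional on takeover without extinction.
x = (e_die - p_ext) / (1 - p_ext)
print(f"Expected fraction dying given takeover but no extinction: {x:.1%}")
# Prints: Expected fraction dying given takeover but no extinction: 33.3%
```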

Hi Emile. I see this is your first comment on the EA Forum. Welcome.

I think the difference in uncertainty is mostly explained by the surveys covering different people, not by ESPAI's predictions having been made around 2.25 years earlier. ESPAI 2023 involved "2,778 researchers who had published in top-tier artificial intelligence (AI) venues", and "took place in the fall of 2023". The 2026 Summit on Existential Security (SES) involved "leaders and key thinkers in the x-risk and AI safety communities". Its survey data "comes from the 59 respondents who consented to their answers being shared publicly", and "was collected in February 2026".


Buck, I would be curious to know your median time from weak AGI to artificial superintelligence (ASI) in this question from Metaculus, and your best guess for the (unconditional) probability of human extinction in the next 10 years.

Thanks for the helpful clarifications, Melanie. They made sense to me.

Hi Buck. True. I still think the survey underestimates the variance in median AI timelines. Below are the results for the 2023 Expert Survey on Progress in AI (ESPAI). Half of the responses for the median date of full automation of tasks or occupations range from around 2045 to some date after 2120, whereas, in the survey of the post, half of the responses for the median date of AGI range from around 2032 to 2037. For the 25th percentile date of full automation, half of ESPAI's responses range from around 2030 to 2100, whereas, in the survey of the post, half of the responses for the 25th percentile date of AGI range from around 2028 to 2032. AGI in the survey of the post does not have exactly the same meaning as full automation of tasks or occupations, but I am pretty confident my broad point stands if I am reading the graph below correctly.

CDF of ESPAI survey showing median and central 50% of expert responses.
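
As a rough sanity check on the interquartile ranges above (a sketch using my approximate readings of the graph; the ESPAI median-date figure is a lower bound because its 75th percentile falls after 2120):

```python
# Interquartile ranges (IQRs) of respondents' dates, per my approximate
# readings of the ESPAI graph and the post's survey results.
espai_median_iqr = 2120 - 2045  # >= 75 years (75th percentile past 2120)
post_median_iqr = 2037 - 2032   # 5 years
espai_p25_iqr = 2100 - 2030     # 70 years
post_p25_iqr = 2032 - 2028      # 4 years

print(f"Median date IQR: ESPAI >= {espai_median_iqr} years; post survey {post_median_iqr} years")
print(f"25th percentile date IQR: ESPAI ~{espai_p25_iqr} years; post survey {post_p25_iqr} years")
```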

What is your probability of human extinction in the 10 years following the achievement of artificial superintelligence (ASI) as defined by the AI Futures Project?


Hi Michael.

(Or: Why I don't see how the probability of extinction could be less than 25% on the current trajectory)

Less than 25 % from now until when?

Thanks for the comment, titotal. I agree the survey underestimates the variance in AI timelines and risk.

The AI Futures Project, which is known for AI 2027, had super broad timelines for artificial superintelligence (ASI) as of January 26. The difference between the 90th and 10th percentile was 168 years for Daniel Kokotajlo (2027 to 2195), and 137 years for Eli Lifland (2028 to 2165).


There is also huge variation in assessments of AI extinction risk. In the Existential Risk Persuasion Tournament (XPT), among domain experts and superforecasters, the 5th and 95th percentiles of AI extinction risk from 2023 to 2100 were 9.45*10^-7 and 37.0 % (excluding the 7 people who guessed a risk of exactly 0; here are the results).
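
Putting these spreads side by side (a quick sketch based only on the figures quoted above):

```python
# Spreads of AI Futures Project ASI timelines (90th minus 10th percentile)
# and of XPT AI extinction risk estimates (95th over 5th percentile),
# using the figures quoted above.
kokotajlo_spread = 2195 - 2027  # 168 years
lifland_spread = 2165 - 2028    # 137 years

xpt_p5, xpt_p95 = 9.45e-7, 0.370
risk_ratio = xpt_p95 / xpt_p5   # ~3.9*10^5, i.e. over 5 orders of magnitude

print(f"Kokotajlo 90th minus 10th percentile: {kokotajlo_spread} years")
print(f"Lifland 90th minus 10th percentile: {lifland_spread} years")
print(f"XPT 95th over 5th percentile extinction risk: {risk_ratio:.2e}")
```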
