Bio

I am a generalist quantitative researcher.

How others can help me

I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How I can help others

I can help with career advice, prioritisation, and quantitative analyses.

Comments

Hi Ben and Richard.

Next year, as well as continued engagement on urban infrastructure, we’ll work on new policy areas such as fertility control and pesticide policy. [...]

[...]

We selected these focus areas – after extensive research and consultation with wild animal welfare experts – because we believe some policy options look realistic, robust and helpful. That is:

  • Tractable in the near term (e.g. wild animal welfare scientists have a recommendation, and there’s an upcoming consultation or policy window), and;
  • Robustly positive across a range of worldviews (i.e. we seek to minimize backfire risks where possible, including individual and population-level risks), and;
  • High in expected value, and/or helpful for spreading our values.

I believe controlling the fertility of rodents instead of killing them may affect soil ants and termites much more than it affects rodents, even if the population of rodents remains constant. So I do not think it robustly increases welfare accounting for all animals.

Hi Mal.

As a result, these individuals hope to identify "ecologically inert" interventions that don't affect population dynamics or have cascading effects. Corporate welfare campaigns might be one sort of intervention that clears this bar.

I think chicken welfare reforms may affect soil ants and termites much more than they affect chickens.

I (and several others) think we could reasonably view a handful of ["ecologically inert"] interventions as worth pursuing under this mindset. Mostly, these sorts of interventions change how humans kill animals or control populations, such that suffering is decreased without changing the net population outcome. Examples might include stunning wild-caught fish before slaughter or replacing rodenticides with fertility control on islands.

I believe controlling the fertility of rodents instead of killing them may affect soil ants and termites much more than it affects rodents, even if the population of rodents remains constant.

I would be curious to know your thoughts on this discussion between me and Anthony DiGiovanni about imprecise expected values.

Thanks, Michael. Do not worry about not having replied earlier.

I agree that the weights/coefficients in the model could end up quite arbitrary, and I would expect them to if someone tried to set them precisely.

I am still thinking that expected values should be precise, or at least practically precise. However, I think the weights of models should be modelled as distributions instead of the constants used in Bob's book about comparing welfare across species, and in Rethink Priorities' (RP's) digital consciousness model (DCM).
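To illustrate the idea (with made-up numbers, not figures from Bob's book or RP's models), here is a minimal Python sketch of treating a model weight as a distribution rather than a constant. The output then becomes a distribution, but its mean is still a single, precise expected value.

```python
import random

random.seed(0)

def welfare_range_estimate(weight):
    # Toy model: blend two hypothetical sub-model outputs by `weight`.
    # The sub-model values 0.05 and 0.60 are illustrative assumptions.
    model_a, model_b = 0.05, 0.60
    return weight * model_a + (1 - weight) * model_b

# Constant weight: a single point estimate.
point = welfare_range_estimate(0.5)

# Weight as a distribution (here uniform on [0.2, 0.8]): a distribution
# of outputs, whose mean is still one precise expected value.
samples = [welfare_range_estimate(random.uniform(0.2, 0.8))
           for _ in range(100_000)]
mean = sum(samples) / len(samples)

print(point, mean)
```

Because this toy model is linear in the weight, the mean of the distributional version matches the point estimate at the mean weight; in nonlinear models the two can differ, which is one reason to propagate the distribution rather than plug in a constant.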

We may be able to give some arguments for some bounds on the weights, and some structural constraints on how the weights relate to each other

I agree.

Within these constraints, the choices are very subjective and highly arbitrary.

I agree.

there may be no fact of the matter at all

I disagree.

Got it. Thanks. Here is what @Ryan Greenblatt says in the piece you linked about fatalities from AI takeover.

My guess is that, conditional on AI takeover, around 50% of currently living people die[3] in expectation and literal human extinction[4] is around 25% likely.[5]

Ryan, could you clarify the timeline for the 50 % of humans dying in expectation conditional on takeover?

The 25 % chance of human extinction refers to the 300 years following takeover, and excludes voluntary informed extinction, which I like because it would not be obviously bad. Here is footnote 4.

Concretely, literally every human in this universe is dead (under the definition of dead included in the prior footnote, so consensual uploads don't count). And, this happens within 300 years of AI takeover and is caused by AI takeover. I'll put aside outcomes where the AI later ends up simulating causally separate humans or otherwise instantiating humans (or human-like beings) which aren't really downstream of currently living humans. I won't consider it extinction if humans decide (in an informed way) to cease while they could have persisted or decide to modify themselves into very inhuman beings (again with informed consent etc.).

Hi Emile. I see this is your 1st comment on the EA Forum. Welcome.

I think the difference in uncertainty is mostly explained by the surveys covering different people, not by ESPAI's predictions having been made around 2.25 years earlier. ESPAI 2023 involved "2,778 researchers who had published in top-tier artificial intelligence (AI) venues", and "took place in the fall of 2023". The 2026 Summit on Existential Security (SES) involved "leaders and key thinkers in the x-risk and AI safety communities", and "Survey data comes from the 59 respondents who consented to their answers being shared publicly", and "was collected in February 2026".


Buck, I would be curious to know your median time from weak AGI to artificial superintelligence (ASI) in this question from Metaculus, and your best guess for the (unconditional) probability of human extinction in the next 10 years.

Thanks for the helpful clarifications, Melanie. They made sense to me.

Hi Buck. True. I still think the survey underestimates the variance in median AI timelines. Below are the results for the 2023 Expert Survey on Progress in AI (ESPAI). Half of the responses for the median date of full automation of tasks or occupations range from around 2045 to some date after 2120, whereas in the survey of the post, half of the responses for the median date of AGI range from around 2032 to 2037. For the 25th percentile date of full automation, half of ESPAI's responses range from around 2030 to 2100, whereas in the survey of the post, half of the responses for the 25th percentile date of AGI range from around 2028 to 2032. AGI in the survey of the post does not have exactly the same meaning as full automation of tasks or occupations, but I am pretty confident my broad point stands if I am reading the graph below correctly.

CDF of ESPAI survey showing median and central 50% of expert responses.

What is your probability of human extinction in the 10 years following the achievement of artificial superintelligence (ASI) as defined by AI Futures?

Hi Michael.

(Or: Why I don't see how the probability of extinction could be less than 25% on the current trajectory)

Less than 25 % from now until when?
