Bio

Participation
4

I am a generalist quantitative researcher. I am open to volunteering and paid work. I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How others can help me

I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How I can help others

I can help with career advice, prioritisation, and quantitative analyses.

Comments
2836

Topic contributions
40

Thanks for the relevant post, Wladimir and Cynthia. I strongly upvoted it. Do you have any practical ideas about how to apply the Sentience Bargain framework to compare welfare across species? I would be curious to know your thoughts on Rethink Priorities' (RP's) research agenda on valuing impacts across species.

Thanks for the great post, Lukas. I strongly upvoted it. I also agree with your concluding thoughts and implications.

Thank you all for the very interesting discussion.

I think addressing the greatest sources of suffering is a promising approach to robustly increase welfare. However, I believe the focus should be on the greatest sources of suffering in the ecosystem, not in any given population, such that effects on non-target organisms can be neglected. Electrically stunning farmed shrimps arguably addresses one of the greatest sources of suffering of farmed shrimps, and the ratio between its effects on target and non-target organisms is much larger than for the vast majority of interventions, but I still do not know whether it increases or decreases welfare (even in expectation) due to potentially dominant effects on soil animals and microorganisms.

I expect the greatest sources of suffering in the ecosystem to be found in the organisms accounting for the most suffering in the ecosystem. However, I would say much more research on comparing welfare across species is needed to identify such organisms. I can see them being vertebrates, invertebrates, trees, or microorganisms.

I worry very specific unrealistic conditions will be needed to ensure the effects on non-target organisms can be neglected if it is not known which organisms account for the most suffering in the ecosystem. So I would prioritise research on comparing welfare across species over mapping sources of suffering in ecosystems.

Thanks, Zoë. I see funders are the ones deciding what to fund, and that you only provide advice if they so wish, as explained below. What if funders ask you for advice on which species to support? Do you base your advice on the welfare ranges presented in Bob's book? Have you considered recommending research on welfare comparisons across species to such funders, such as the projects in RP's research agenda on valuing impacts across species?

Q: Do Senterra Funders staff decide how funders make grant decisions?

A: No, each Senterra member maintains full autonomy over their grantmaking. Some Senterra members seek Senterra’s philanthropic advising, in which Senterra staff conduct research and make recommendations specific to the donor’s interests. Some Senterra members engage in collaborative grantmaking facilitated by Senterra staff. Ultimately, it’s up to each member to decide how and where to give.

Thanks for the great post, Srdjan. I strongly upvoted it.

Fair point, Nick. I would just keep in mind there may be very different types of digital minds, and some types may not speak any human language. We can more easily understand chimps than shrimps. In addition, the types of digital minds driving the expected total welfare might not speak any human language. I think there is a case for keeping an eye out for something like digital soil animals or microorganisms, by which I mean simple AI agents or algorithms, at least for people caring about invertebrate welfare. On the other end of the spectrum, I am also open to just a few planet-size digital beings being the driver of expected total welfare.

Thanks for the post, Noah. I strongly upvoted it.

  • 5. How much total welfare capacity might digital minds have relative to humans/other animals
    • a. Related questions include: the estimated scale of digital minds, moral weights-esque projects, which part of the model would have moral weight.

I think this is a very important uncertainty. Discussions of digital minds overwhelmingly focus on the number of individuals, and the probability of consciousness or sentience. However, one has to multiply these factors by the expected individual welfare per year conditional on consciousness or sentience to get the expected total welfare per year. I believe this should eventually be determined for different types of digital minds because there could be huge differences in their expected individual welfare per year. I did this for biological organisms assuming expected individual welfare per fully-healthy-organism-year proportional to "individual number of neurons"^"exponent", and to "energy consumption per unit time at rest [basal metabolic rate (BMR)] at 25 ºC"^"exponent", and found potentially super large differences in the expected total welfare per year.
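As a rough illustration of how much the exponent choice matters in a model of this kind, here is a minimal sketch. The neuron counts are round published-estimate figures, and the exponents and the proportionality assumption itself are purely hypothetical, not the values used in my analysis:

```python
# Illustrative sketch: individual welfare per year assumed proportional to
# (number of neurons)^exponent. Neuron counts are rough round figures;
# the exponents are hypothetical.
neurons = {"human": 86e9, "chicken": 221e6, "honey bee": 1e6}

def relative_welfare(species, exponent, reference="human"):
    """Welfare per individual-year relative to the reference species."""
    return (neurons[species] / neurons[reference]) ** exponent

for exp in (0.5, 1.0):
    print(exp, round(relative_welfare("chicken", exp), 4))
# exponent 0.5 -> ~0.0507; exponent 1.0 -> ~0.0026 (a ~20x difference)
```

Even this toy version shows the relative welfare of a chicken versus a human shifting by more than an order of magnitude between exponents of 0.5 and 1, which is why the total across many individuals can vary super largely.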

I think much more work on welfare comparisons across species is needed to conclude which interventions robustly increase welfare. I do not know about any intervention which robustly increases welfare due to potentially dominant uncertain effects on soil animals and microorganisms. I suspect work on welfare comparisons across different digital minds will be important for the same reason.

In a 2019 report from Rethink Priorities (though it could be very different now for various reasons), Saulius Simcikas found that, for each $1 spent on corporate campaigns, 9-120 years of chicken lives could be affected (excluding indirect effects, which could be very important too).

Animal Charity Evaluators (ACE) estimated The Humane League's (THL's) work targeting layers in 2024 helped 11 layers per $. The Welfare Footprint Institute (WFI) assumes layers have a lifespan of "60 to 80 weeks for all systems", around 1.34 chicken-years (= (60 + 80)/2*7/365.25). So I estimate THL's work targeting layers in 2024 improved 14.8 chicken-years per $ (= 11*1.34), which is close to the lower bound from Saulius you mention above.
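The estimate above can be sketched in a few lines, using only the numbers from the comment (11 layers helped per $, and WFI's 60-80 week lifespan range):

```python
# Chicken-years improved per $ from THL's 2024 work targeting layers.
layers_per_dollar = 11                      # ACE estimate
lifespan_weeks = (60 + 80) / 2              # midpoint of WFI's 60-80 weeks
lifespan_years = lifespan_weeks * 7 / 365.25
chicken_years_per_dollar = layers_per_dollar * lifespan_years
print(round(lifespan_years, 2))             # ~1.34
print(round(chicken_years_per_dollar, 1))   # ~14.8
```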

Thanks for sharing, Kevin and Max. Are you planning to do any cost-effectiveness analyses (CEAs) to assess potential grants? I may help with these for free if you are interested.

Global wealth would have to increase a lot for everyone to become a billionaire. There are 10 billion people. So everyone being a billionaire would require a global wealth of 10^19 $ (= 10*10^9*1*10^9) for a perfect distribution. Global wealth is 600 T$. So it would have to become 16.7 k (= 10^19/(600*10^12)) times as large. For a growth of 10 %/year, it would take 102 years (= LN(16.7*10^3)/LN(1 + 0.10)). For a growth of 30 %/year, it would take 37.1 years (= LN(16.7*10^3)/LN(1 + 0.30)).
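The arithmetic above, written out, with the same inputs (10 billion people, $1 billion each, $600 T of current global wealth):

```python
import math

people = 10e9
target_per_person = 1e9
current_wealth = 600e12                            # 600 T$
required_wealth = people * target_per_person       # 10^19 $
growth_factor = required_wealth / current_wealth   # ~16.7 k

def years_to_grow(factor, annual_growth):
    """Years for wealth to grow by `factor` at `annual_growth` per year."""
    return math.log(factor) / math.log(1 + annual_growth)

print(round(growth_factor / 1e3, 1))              # ~16.7 (thousands)
print(round(years_to_grow(growth_factor, 0.10)))  # ~102 years at 10 %/year
print(round(years_to_grow(growth_factor, 0.30)))  # ~37 years at 30 %/year
```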

I was considering hypothetical scenarios of the type "imagine this offer from MIRI arrived, would a lab accept"

When would the offer from MIRI arrive in the hypothetical scenario? I am sceptical of an honest endorsement from MIRI today being worth 3 billion $, but I do not have a good sense of what MIRI will look like in the future. I would also agree a fool-proof AI safety certification is or will be worth more than 3 billion $ depending on how it is defined.

With your bets about timelines - I did 8:1 bet with Daniel Kokotajlo against AI 2027 being as accurate as his previous forecast, so not sure which side of the "confident about short timelines" do you expect I should take.

I was guessing I would have longer timelines. What is your median date of superintelligent AI as defined by Metaculus?
