I am a generalist quantitative researcher. I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).
I can help with career advice, prioritisation, and quantitative analyses.
Thanks for the post, Carl.
- As funding expands in focused EA priority issues, eventually diminishing returns there will equalize with returns for broader political spending, and activity in the latter area could increase enormously: since broad political impact per dollar is flatter over a large range political spending should either be a very small or very large portion of EA activity
Great point.
Agreed, David.
instead believe only humans and animals experiencing well-being is good
Nitpick. I would say humans, animals, microorganisms, and digital beings.
Thanks for sharing, Brendan. You may want to try RoastMyPost, and then share any feedback you may have with @Ozzie Gooen.
Thanks for the post, Joey.
Doing neglectedness right
“Considering the main two areas I am considering, food systems climate is more neglected than clean energy climate.”
I think this sort of comparison makes a lot of sense. It is trying to look at the real opportunity cost of what else would be supported by people (or yourself) considering the area. [...]
I think you are suggesting people say i) "X is more neglected than Y" if ii) "X is more cost-effective than Y at the margin". I believe it would be better for people to simply say ii) as applied to the relevant context. For example, that funding X with 10 k$ would save more lives than funding Y by the same amount. As you pointed out, i) could be interpreted in many different ways, and therefore can lead to misunderstandings.
I think a decent proxy for neglect is: what is the group right on the edge?
This is very unclear to me. For individual welfare per fully-healthy-animal-year proportional to "individual number of neurons"^"exponent", and a range of 0.5 to 1.5 for "exponent", which I believe covers reasonable best guesses, I estimate that the Shrimp Welfare Project’s (SWP’s) Humane Slaughter Initiative (HSI) has increased the welfare of shrimps via increasing the adoption of electrical stunning 0.00167 (= 2.06*10^-5/0.0123) to 1.67 k (= 20.6/0.0123) times as cost-effectively as GiveWell's top charities increase the welfare of humans. So I can easily see HSI increasing the welfare of shrimps much more or less cost-effectively than GiveWell's top charities increase the welfare of humans.
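For concreteness, here is a minimal sketch of the arithmetic behind that range, just reproducing the endpoint figures above. Treating those figures as cost-effectiveness estimates on a common welfare-per-dollar scale is my assumption for illustration, as is mapping the higher exponent to the lower endpoint (shrimps have far fewer neurons than humans, so a higher exponent penalises them more).

```python
# Sketch of the ratio range above. The figures are treated as cost-effectiveness
# estimates on a common welfare-per-dollar scale (an assumption for illustration).
hsi_exponent_1_5 = 2.06e-5  # HSI with "exponent" = 1.5 (assumed to be the low end)
hsi_exponent_0_5 = 20.6     # HSI with "exponent" = 0.5 (assumed to be the high end)
givewell_top_charities = 0.0123

print(hsi_exponent_1_5 / givewell_top_charities)  # ~0.00167
print(hsi_exponent_0_5 / givewell_top_charities)  # ~1.67e3, i.e. 1.67 k
```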
Thanks for the post, Tristan. I am pessimistic about finding interventions that robustly increase welfare (in expectation) accounting for soil animals and microorganisms. I do not think electrical stunning qualifies, although it decreases intense pain experienced by the target beneficiaries, and the ratio between its effects on other organisms and those on the target beneficiaries is much smaller than for the vast majority of interventions.
TL;DR: Almost all suffering in the world today is experienced by wild animals
This is unclear to me. I estimated the above is not the case for individual welfare per fully-healthy-animal-year proportional to "individual number of neurons"^"exponent", and "exponent" = 1.5, which is the upper bound of the range of 0.5 to 1.5 that I guess covers reasonable best guesses. For that exponent, I calculate the absolute value of the total welfare of wild birds, mammals, and finfishes is 4.12 % of the total welfare of humans, and that the absolute value of the total welfare of soil ants, termites, springtails, mites, and nematodes is 1.89 % of the total welfare of humans.
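For reference, here is a minimal sketch of the kind of aggregation behind such comparisons under the model named above (group welfare proportional to population times neurons^exponent times welfare as a fraction of that of a fully healthy life). All inputs below are purely illustrative placeholders, not the estimates behind the 4.12 % and 1.89 % figures.

```python
def relative_total_welfare(population, neurons_per_individual, quality, exponent):
    """Group welfare per year, up to a constant that cancels in ratios.

    Assumes individual welfare per fully-healthy-animal-year is proportional
    to neurons_per_individual**exponent, and `quality` is welfare as a
    fraction of that of a fully healthy life (negative for net-negative lives).
    """
    return population * quality * neurons_per_individual**exponent

# Purely hypothetical inputs, only to show how the exponent drives the comparison.
humans = relative_total_welfare(8e9, 8.6e10, quality=0.5, exponent=1.5)
nematodes = relative_total_welfare(1e20, 3e2, quality=-0.01, exponent=1.5)
print(abs(nematodes) / abs(humans))
```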
Thanks for this work. I find it valuable.
AIs could have negligible welfare (in expectation) even if they are conscious. They may not be sentient even if they are conscious, or have negligible welfare even if they are sentient. I would say the (expected) total welfare of a group (individual welfare times population) matters much more for its moral consideration than the probability of consciousness of its individuals. Do you have any plans to compare the individual (expected hedonistic) welfare of AIs, animals, and humans? You do not mention this in the section "What’s next".
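To make the decomposition I have in mind explicit, here is a sketch with purely illustrative numbers (none of the values below are estimates).

```python
# Expected total welfare of a group, decomposed as suggested above.
# All values are purely illustrative, not estimates.
p_conscious = 0.8                    # probability of consciousness
p_sentient_given_conscious = 0.2     # may be well below 1
welfare_if_sentient_per_year = 1e-6  # expected individual welfare if sentient
population = 1e9

expected_total_welfare = (
    p_conscious * p_sentient_given_conscious
    * welfare_if_sentient_per_year * population
)
# A high probability of consciousness alone does not pin this down; the total
# also scales with the probability of sentience and the welfare magnitude.
print(expected_total_welfare)
```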
Do you have any ideas for how to decide on the priors for the probability of sentience? I agree decisions about priors are often very arbitrary, and I worry different choices will have significantly different implications.
I like that you report the results for each perspective. People usually give weights that are at least 0.1/"number of models", which are not much smaller than the uniform weight of 1/"number of models", but this could easily lead to huge mistakes. As a silly example, if I asked random 7-year-olds whether the gravitational force between 2 objects is proportional to "distance"^-2 (correct answer), "distance"^-20, or "distance"^-200, I imagine I would get a significant fraction picking the exponents of -20 and -200. Assuming 60 % picked -2, 20 % picked -20, and 20 % picked -200, a respondent may naively conclude the mean exponent of -45.2 (= 0.6*(-2) + 0.2*(-20) + 0.2*(-200)) is reasonable. Alternatively, a respondent may naively conclude an exponent of -9.19 (= 0.933*(-2) + 0.0333*(-20) + 0.0333*(-200)) is reasonable, giving a weight of 3.33 % (= 0.1/3) to each of the 2 wrong exponents, equal to 10 % of the uniform weight, and the remaining weight of 93.3 % (= 1 - 2*0.0333) to the correct exponent. Yet, there is lots of empirical evidence against the exponents of -45.2 and -9.19 which the respondents are not aware of. The right conclusion would be that the respondents have no idea about the right exponent, or how to weight the various models, because they would not be able to adequately justify their picks.

This is also why I am sceptical that the absolute value of the welfare per unit time of animals is bound to be relatively close to that of humans, as one may naively infer from the welfare ranges Rethink Priorities (RP) initially presented, or the ones in Bob Fischer's book about comparing welfare across species, where there seems to be only 1 line about the weights: "We assigned 30 percent credence to the neurophysiological model, 10 percent to the equality model, and 60 percent to the simple additive model".
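The arithmetic in the silly example, as a sketch:

```python
# Weighted means of the candidate exponents in the example above.
exponents = [-2, -20, -200]  # -2 is the correct one

survey_weights = [0.6, 0.2, 0.2]  # shares of respondents picking each exponent
print(sum(w * e for w, e in zip(survey_weights, exponents)))  # -45.2

small = 0.1 / 3  # 10 % of the uniform weight for each of the 2 wrong exponents
adjusted_weights = [1 - 2 * small, small, small]
print(sum(w * e for w, e in zip(adjusted_weights, exponents)))  # ~-9.2 (-9.19 with the rounded weights above)
```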
Mistakes like the one illustrated above happen when the weights of models are guessed independently of their output. People are often sensitive to astronomical outputs, but not to the astronomically low weights they imply. How do you ensure the weights of the models used to estimate the probability of consciousness are reasonable, and sensitive to their outputs? I would model the weights of the models as very wide distributions to represent very high model uncertainty.
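One way the last suggestion could be implemented (a sketch, assuming nothing about the report's actual models; the 3 model outputs below are hypothetical): sample the weights from a wide distribution over the simplex, such as a Dirichlet with concentration parameters well below 1, and propagate them to the final estimate, so the reported uncertainty reflects the model uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outputs of 3 models for the probability of consciousness.
model_outputs = np.array([0.05, 0.3, 0.8])

# Wide distribution over the weights: a Dirichlet with low concentration
# puts most of the weight on a single model in most draws, representing
# high uncertainty about the weighting itself rather than a fixed 1/3 each.
weights = rng.dirichlet([0.3, 0.3, 0.3], size=10_000)

weighted = weights @ model_outputs
print(weighted.mean(), np.quantile(weighted, [0.05, 0.95]))
```

With equal concentration parameters the mean is the same as under uniform weights; what changes is how wide the resulting distribution is.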