Bio


I am a generalist quantitative researcher.

How others can help me

I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How I can help others

I can help with career advice, prioritisation, and quantitative analyses.

Comments


Hi David. I think there are at least 2 corollaries attributed to Asimov. The one Peter mentions is Asimov's corollary to Parkinson's law, whereas the post you linked to presents Asimov's corollary to Clarke's 1st law. I also liked this latter corollary.

When, however, the lay public rallies around an idea that is denounced by elderly but distinguished scientists and supports that idea with great fervor and emotion—the distinguished but elderly scientists are then, after all, probably right.

Hi Riccardo.

The idea that experiential intensity simply scales down with the number of neurons seems hard to accept: it implies that simpler organisms live something like a barely-there flicker of experience, which also places us humans at the apex of perceived intensity in the universe.

Very simple organisms could still matter a lot despite having much less intense experiences. I estimate farmed animals and soil invertebrates have 1.87 and 253 times as many neurons in total as humans. The graph below has more detail. Nematodes are the animals with the fewest neurons, with an adult Caenorhabditis elegans having 302 neurons, but I estimate soil nematodes have 169 times as many neurons in total as humans.
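For concreteness, here is a minimal sketch of the kind of aggregate comparison I have in mind. The soil nematode population of 4.4*10^20 is the estimate from van den Hoogen et al. 2019, and the human population of 8 billion is a round assumption, so the ratio comes out around 190 rather than my 169, which relies on slightly different inputs.

```python
# Back-of-the-envelope comparison of total neurons, soil nematodes vs humans.
# The population figures are illustrative assumptions, not my actual inputs.

NEMATODES_IN_SOIL = 4.4e20   # van den Hoogen et al. 2019
NEURONS_PER_NEMATODE = 302   # adult Caenorhabditis elegans
HUMAN_POPULATION = 8.0e9     # round assumption
NEURONS_PER_HUMAN = 86e9

nematode_neurons = NEMATODES_IN_SOIL * NEURONS_PER_NEMATODE
human_neurons = HUMAN_POPULATION * NEURONS_PER_HUMAN

print(f"Soil nematode neurons: {nematode_neurons:.2e}")  # 1.33e+23
print(f"Human neurons: {human_neurons:.2e}")             # 6.88e+20
print(f"Ratio: {nematode_neurons / human_neurons:.0f}")  # ~193
```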

humans at the apex of perceived intensity in the universe

There are animals with more neurons than humans. Short-finned pilot whales and African elephants have 128 billion and 257 billion neurons, 1.49 (= 128/86) and 2.99 (= 257/86) times as many as humans.

I also think there's a distinction worth drawing between the "dimensionality" of an experience (how many qualitative states a mind can occupy) and its intensity. A simple mind might have very few "keys," but still hit each of them hard.

I agree.

Hi Aaron. I liked this post.

Math

Some days, I don't have any particular motivation. That's when I turn to expected utility.

Very funny.

Hi James. Thanks for the valuable post.

Hi Charles.

But even Farmkind’s more generous number is still disappointingly low. To many EAs I have spoken to, it seems to imply that going vegan is only “worth” a paltry $276—or alternatively, that donating at least $276 to pro-animal charities “buys” them the freedom from being vegan.

I estimate based on results from Animal Charity Evaluators (ACE) that donating 2.34 $/person-year (= 138/59.0) to The Humane League (THL) offsets the effects on farmed animals of (random) people in the United States (US), which is 0.848 % (= 2.34/276) of FarmKind's estimate.

we're dead OR there's infinite abundance

Your median date for this is in 2035?

Maybe would be open to "you transfer 1k to me now (2026), I give you interest-indexed 2k in 2035" or whatever odds make sense.

I made a similar bet in the past, but the one above is not worth it for me. Global stocks had annual real returns of 5 % from 1900 to 2022, and I expect faster growth from 2026 to 2035. For an annual real growth of 7.5 %, the bet would only be worth it if I won 1.92 (= (1 + 0.075)^(2035 - 2026)) times as many 2026-$ as those I initially invested. Assuming a 75 % chance of winning given the risk of you not paying me, I would need to win 2.56 (= 1.92/0.75) times as many 2026-$ as those I initially invested. Are you open to a bet like the one here, but resolving at the end of 2034, and involving a potential gain for me of 3 times as many 2026-$ as those I initially invested (for example, 1 k 2026-$)? If so, are you also open to disclosing and confirming your identity? I would want this to ensure you have a greater incentive to respect the bet.
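As a minimal sketch, the breakeven payout multiple above can be computed as follows, with the 7.5 % annual real growth and the 75 % chance of being paid as the assumptions.

```python
# Breakeven payout multiple, in 2026-$, for a bet funded by selling global
# stocks, following the reasoning above.

annual_real_growth = 0.075  # my assumed annual real return from 2026 to 2035
years = 2035 - 2026
p_paid = 0.75               # chance of winning and actually being paid

opportunity_cost = (1 + annual_real_growth) ** years  # 1.92
breakeven_multiple = opportunity_cost / p_paid        # 2.56

print(f"Opportunity cost multiple: {opportunity_cost:.2f}")
print(f"Breakeven payout multiple: {breakeven_multiple:.2f}")
```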

This kind of sounds like you're thinking of pain as a sort of physical magnitude, like weight or charge.

Yes.

I'm more inclined towards functionalist interpretations of welfare

Which kind of functionalism? I am very sceptical of at least computational functionalism (CF). Any algorithm run by a digital computer can be executed with pen and paper (although it may take a super long time), and I have a hard time imagining how such a process would itself be conscious.

In that case, you might be deeply skeptical that small animals have the right functional role at all, but once you grant they do, it is much more plausible that welfare ranges are similar to humans.

This assumes that the effect on welfare of having the right functional role is not moderated, or is only very weakly moderated, by physical quantities like the number of neurons (as in Bob's book). I find this very counterintuitive. It implies that a human who is the size of a galaxy would have the same welfare as a normal human.

However, for my point to be right, I think you just need to treat these kinds of functionalist views as in the running. You don't have to be confident that they're true.

I agree. However, I think the weight of models which are practically not sensitive to physical quantities could be astronomically low. Mistakes like the one I illustrated above about gravitational force happen when the weights of models are guessed independently of their consequences. I suspect the variance in weights should not be that different from the variance in the consequences. For example, for welfare ranges of a) 10^-100, b) 10^-10, and c) 1, I would guess weights not that different from 1 on a), 10^-10 on b), and 10^-100 on c).
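As a minimal sketch of this point, using the illustrative welfare ranges and weights above, near-uniform weights make the expected welfare range track the model with the largest consequences, whereas consequence-sensitive weights do not.

```python
# Expected welfare range under 2 weighting schemes over the 3 models above,
# with welfare ranges of 1e-100, 1e-10, and 1.

welfare_ranges = [1e-100, 1e-10, 1.0]

def expected_welfare_range(weights, ranges):
    total = sum(weights)
    return sum(w / total * r for w, r in zip(weights, ranges))

uniform = [1/3, 1/3, 1/3]
consequence_sensitive = [1.0, 1e-10, 1e-100]  # my guessed weights above

print(expected_welfare_range(uniform, welfare_ranges))                # ~0.33
print(expected_welfare_range(consequence_sensitive, welfare_ranges))  # ~1e-20
```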

Meanwhile, a goalkeeper taking a 100km/h shot to the face will insist it was a great experience because they made the save. Even they might look soft compared to a Thai boxer, who will literally laugh in their opponent's face after taking a heavy hit.

A typically painful experience may become less painful via training. In addition, even if it remains significantly painful, people could still consider it worth it for other reasons (like helping their team win).

But if we are contemplating something as extreme as zoocide, we had better be absolutely certain that their lives aren't worth living by their standards, not ours.

I meant that I guess the probability of wild invertebrates having positive/negative lives is roughly 50 % from their own perspective, which I agree is what matters. Imagine wild invertebrates have a welfare (from their own perspective) per animal-year of -1 (on some scale) with probability 99 %, and 1 with probability 1 %. Their expected welfare per animal-year would be -0.98 (= 0.99*(-1) + 0.01*1). Would you still oppose decreasing their population because it is not certain that their lives are negative? If not because it would be almost certain that they have negative lives, how low would the probability of them having negative lives have to be for you to oppose decreasing their population? Why any particular value?

I would want to increase/decrease their population as long as they had a positive/negative expected welfare per animal-year if there were no more options besides changing their population. In practice, I think decreasing the very large uncertainty about whether they have positive or negative lives is a better option than changing their population.
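As a minimal sketch of the expected-welfare reasoning above:

```python
# Expected welfare per animal-year for the 2-outcome example above.

p_negative = 0.99
expected_welfare = p_negative * (-1) + (1 - p_negative) * 1
print(f"{expected_welfare:.2f}")  # -0.98

# Under expected-welfare reasoning, the sign only flips at p_negative = 0.5,
# so certainty about lives being negative is not required.
```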

Hi Derek.

You don't need evidence of a welfare range that is not too insignificant -- you need a presumption of a reasonable probability of a welfare range that is not too small and no significant evidence against it.

I find this wording a bit confusing. However, I think you mean that the expected welfare range will be significant (for example, at least 1 % of that of humans) as long as there is one plausible model (for example, which gets 10 % weight) which predicts a significant welfare range (for example, 10 % of that of humans).

I have significant concerns about this kind of reasoning. I worry the weights of the models are close to arbitrary. In Bob Fischer's book about comparing welfare across species, there seems to be only 1 line about the weights. "We assigned 30 percent credence to the neurophysiological model, 10 percent to the equality model, and 60 percent to the simple additive model". People usually give weights that are at least 0.1/"number of models", which is at least 3.33 % (= 0.1/3) for 3 models, when it is quite hard to estimate the weights. However, giving weights which are not much smaller than the uniform weight of 1/"number of models" could easily lead to huge mistakes.

As a silly example, if I asked random people with age 7 about whether the gravitational force between 2 objects is proportional to "distance"^-2 (correct answer), "distance"^-20, or "distance"^-200, I imagine I would get a significant fraction picking the exponents of -20 and -200. Assuming 60 % picked -2, 20 % picked -20, and 20 % picked -200, one may naively conclude the mean exponent of -45.2 (= 0.6*(-2) + 0.2*(-20) + 0.2*(-200)) is reasonable. Yet, there is lots of empirical evidence against this which the respondents are not aware of. The right conclusion would be that the respondents have practically no idea about the right exponent because they would not be able to adequately justify their picks.
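As a minimal sketch of how huge the mistake is, the naive mean exponent implies a wildly wrong prediction for how the force changes when the distance doubles:

```python
# Naive weighted mean of gravity exponents from the hypothetical poll above,
# and the force it predicts when the distance between 2 objects doubles.

weights_and_exponents = [(0.6, -2), (0.2, -20), (0.2, -200)]

mean_exponent = sum(w * e for w, e in weights_and_exponents)
print(mean_exponent)  # -45.2

# Relative force when distance doubles (correct answer: 2**-2 = 0.25).
print(2.0 ** mean_exponent)  # ~2.4e-14, off by ~13 orders of magnitude
```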

And we don't have a good theory of how to get from neurons to welfare.

It would be great to have more research on this. I wonder whether electromagnetic (EM) field theories of consciousness could shed some light on it. I assume the maximum intensity of the EM fields generated by brain activity depends on the number of neurons, at least when assessed across species (there is little variance in the number of neurons across humans, which means the maximum intensity of their EM fields may not vary much).

Hi Jim. Thanks for the relevant post. I very much agree.

Many people prioritise animals with a higher probability of sentience: humans, then mammals, birds, finfishes, and finally invertebrates. However, I suspect most do it for other reasons. A higher probability of sentience implies a higher chance of increasing (and decreasing) welfare, but people routinely take actions which are super unlikely to actually matter:

  • I calculate driving a car for 10 km in Great Britain without a seatbelt leads to 1 additional death with a probability of 1 in 73.0 M. The probability of sentience of shrimps presented in Bob Fischer's book about comparing welfare across species is 29.2 M (= 0.40*73.0*10^6) times as high (40 %).
  • Andrew Gelman found the probability of a voter in a small United States (US) state polling around 50/50 in a close election nationally changing the outcome of the national election could get as high as 1 in 3 million. The above probability of sentience of shrimps is 1.2 M (= 0.40*3*10^6) times as high, as illustrated in the sketch after this list.
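As a minimal sketch of the 2 comparisons above:

```python
# How the 40 % probability of shrimp sentience compares with the probability
# of the 2 actions above actually mattering.

p_shrimp_sentience = 0.40         # from Bob Fischer's book
p_death_no_seatbelt = 1 / 73.0e6  # per 10 km driven in Great Britain
p_decisive_vote = 1 / 3e6         # Andrew Gelman's upper bound

print(f"{p_shrimp_sentience / p_death_no_seatbelt:.3g}")  # 2.92e+07
print(f"{p_shrimp_sentience / p_decisive_vote:.3g}")      # 1.2e+06
```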

I also believe there are other factors (besides the probability of sentience) which may be more important for the probability of a small donation increasing animal welfare.

I would take for granted that all animals are sentient (certain to have some kind of valenced experiences), and focus on assessing their welfare range as you suggest. I think there should be way more research on this given the large uncertainty. For example, for a welfare range proportional to "individual number of neurons"^"exponent", and an exponent from 0 to 2, which covers the range that I consider reasonable, I estimate that the Shrimp Welfare Project's (SWP's) Humane Slaughter Initiative (HSI) has increased the welfare of shrimps 1.68*10^-6 to 1.68 M times as cost-effectively as GiveWell's top charities increase the welfare of humans.
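As a minimal sketch of how that range arises, one can back out from my 2 endpoints a shrimp-to-human neuron ratio of 10^-6. This ratio and the value at an exponent of 0 are derived from the estimates above rather than being the actual inputs of my analysis.

```python
# Relative cost-effectiveness of SWP's HSI versus GiveWell's top charities as
# a function of the exponent on the individual number of neurons. The 1e-6
# neuron ratio and the 1.68e6 value at an exponent of 0 are backed out from
# my estimates above, not independent inputs.

neuron_ratio = 1e-6        # shrimp neurons / human neurons (assumed)
ce_at_exponent_0 = 1.68e6  # relative cost-effectiveness for equal welfare ranges

for exponent in [0, 1, 2]:
    relative_ce = ce_at_exponent_0 * neuron_ratio ** exponent
    print(f"exponent = {exponent}: {relative_ce:.3g}")
# exponent = 0: 1.68e+06
# exponent = 1: 1.68
# exponent = 2: 1.68e-06
```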
