Bio

I am a generalist quantitative researcher. I am open to volunteering and paid work. I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How others can help me

I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How I can help others

I can help with career advice, prioritisation, and quantitative analyses.

Comments

Thanks, Spiarrow. I just set up a periodic reminder to check the post as a result of your comment.

Here is some more context about the exponent-based approach.

I agree, assuming they are conscious.

You may be interested in these posts:

This suggests that neuron count alone is too crude a metric (though perhaps useful when comparing organisms with very different brains).

I like to compare the sentience-adjusted welfare ranges (probability of sentience times the welfare range conditional on sentience) of organisms with neurons assuming they are proportional to "individual number of neurons"^"exponent". I consider exponents from 0 to 2 reasonable best guesses. An exponent of 0.188 explains very well the sentience-adjusted welfare ranges presented in Bob's book (which rely on much more than the individual number of neurons). Below is a graph illustrating this.
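
Here is a minimal sketch of the approach in Python, assuming sentience-adjusted welfare ranges proportional to "individual number of neurons"^"exponent" and normalising to humans. The 86 billion neurons for humans and the 302 neurons for an adult Caenorhabditis elegans are standard figures used purely for illustration, not the exact inputs behind my estimates.

```python
# Minimal sketch: sentience-adjusted welfare ranges assumed proportional to
# (individual number of neurons)^exponent, normalised to humans.

HUMAN_NEURONS = 86e9  # roughly 86 billion neurons in an adult human


def relative_welfare_range(neurons: float, exponent: float) -> float:
    """Sentience-adjusted welfare range relative to a human, assuming
    proportionality to the individual number of neurons raised to exponent."""
    return (neurons / HUMAN_NEURONS) ** exponent


# For example, an adult Caenorhabditis elegans with 302 neurons.
for exponent in (0, 0.188, 1, 2):
    print(exponent, relative_welfare_range(302, exponent))
```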

For comparisons involving organisms with and without neurons, I would assume sentience-adjusted welfare ranges proportional to "individual mass"^"exponent", or "metabolic rate"^"exponent". I do not think the specific proxy matters that much. In allometry, "the study of the relationship of body size to shape, anatomy, physiology and behaviour", "The relationship between the two measured quantities is often expressed as a power law equation (allometric equation)". If the sentience-adjusted welfare range is proportional to "proxy 1"^"exponent 1", and "proxy 1" is proportional to "proxy 2"^"exponent 2", the sentience-adjusted welfare range is proportional to "proxy 2"^("exponent 1"*"exponent 2"). So the results for "proxy 2" and exponent "exponent 1"*"exponent 2" are the same as those for "proxy 1" and "exponent 1".
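
In symbols, with W the sentience-adjusted welfare range, p_1 and p_2 the two proxies, and e_1 and e_2 the two exponents, the composition above is just:

```latex
W \propto p_1^{e_1}, \qquad p_1 \propto p_2^{e_2}
\;\Rightarrow\; W \propto \left(p_2^{e_2}\right)^{e_1} = p_2^{e_1 e_2}.
```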

Hi David. I think there are at least 2 of Asimov's corollaries. Peter's is Asimov's corollary to Parkinson's law, whereas the post you linked to presents Asimov's corollary to Clarke's 1st law. I also liked this corollary:

When, however, the lay public rallies around an idea that is denounced by elderly but distinguished scientists and supports that idea with great fervor and emotion—the distinguished but elderly scientists are then, after all, probably right.

Hi Riccardo.

The idea that experiential intensity simply scales down with the number of neurons seems hard to accept: it implies that simpler organisms live something like a barely-there flicker of experience, which also places us humans at the apex of perceived intensity in the universe.

Very simple organisms could still matter a lot despite having much less intense experiences. I estimate farmed animals and soil invertebrates have 1.87 and 253 times as many neurons in total as humans. The graph below has more detail. Nematodes are the animals with the fewest neurons, with an adult Caenorhabditis elegans having 302 neurons, but I estimate soil nematodes have 169 times as many neurons in total as humans.

humans at the apex of perceived intensity in the universe

There are animals with more neurons than humans. Short-finned pilot whales and African elephants have 128 billion and 257 billion neurons, respectively, 1.49 (= 128/86) and 2.99 (= 257/86) times the 86 billion of humans.

I also think there's a distinction worth drawing between the "dimensionality" of an experience (how many qualitative states a mind can occupy) and its intensity. A simple mind might have very few "keys," but still hit each of them hard.

I agree.

Hi Aaron. I liked this post.

Math

Some days, I don't have any particular motivation. That's when I turn to expected utility.

Very funny.

Hi James. Thanks for the valuable post.

Hi Charles.

But even Farmkind’s more generous number is still disappointingly low. To many EAs I have spoken to, it seems to imply that going vegan is only “worth” a paltry $276—or alternatively, that donating at least $276 to pro-animal charities “buys” them the freedom from being vegan.

I estimate, based on results from Animal Charity Evaluators (ACE), that donating 2.34 $/person-year (= 138/59.0) to The Humane League (THL) offsets the effects on farmed animals of a random person in the United States (US), which is 0.848 % (= 2.34/276) of FarmKind's estimate.

we're dead OR there's infinite abundance

Your median date for this is in 2035?

Maybe would be open to "you transfer 1k to me now (2026), I give you interest-indexed 2k in 2035" or whatever odds make sense.

I made a similar bet in the past, but the one above is not worth it for me. Global stocks had annual real returns of 5 % from 1900 to 2022, and I expect faster growth from 2026 to 2035. For an annual real growth of 7.5 %, the bet would only be worth it if I won at least 1.92 (= (1 + 0.075)^(2035 - 2026)) times as many 2026-$ as those I initially invested. Assuming a 75 % chance of being paid, given the risk of you not paying me, I would need to win at least 2.56 (= 1.92/0.75) times as many 2026-$ as those I initially invested. Are you open to a bet like the one here, but resolving at the end of 2034, and involving a potential gain for me of 3 times as many 2026-$ as those I initially invested (for example, 1 k 2026-$)? If so, are you also open to disclosing and confirming your identity? I would want this to ensure you have a greater incentive to respect the bet.
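
For concreteness, here is a minimal sketch of the break-even calculation in Python, using the 7.5 % annual real growth and the 75 % chance of being paid assumed above.

```python
# Minimal sketch of the break-even multiple for the bet discussed above.

def break_even_multiple(annual_real_growth: float, years: int,
                        chance_of_being_paid: float) -> float:
    """Multiple of the initial stake (in start-year dollars) the bet must pay
    out for it to match investing the stake in global stocks instead."""
    opportunity_cost = (1 + annual_real_growth) ** years  # 1.92 for the values below
    return opportunity_cost / chance_of_being_paid        # 2.56 for the values below


print(break_even_multiple(0.075, 2035 - 2026, 0.75))
```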

This kind of sounds like you're thinking of pain as a sort of physical magnitude, like weight or charge.

Yes.

I'm more inclined towards functionalist interpretations of welfare

Which kind of functionalism? I am very sceptical of at least computational functionalism (CF). Any algorithm run by a digital computer can be executed with pen and paper (although it may take a super long time), and I have a hard time imagining how such a process would itself be conscious.

In that case, you might be deeply skeptical that small animals have the right functional role at all, but once you grant they do, it is much more plausible that welfare ranges are similar to humans.

This assumes that the effect on welfare of having the right functional role is not moderated, or is only very weakly moderated, by physical quantities like the number of neurons (as in Bob's book). I find this very counterintuitive. It implies that a human who is the size of a galaxy would have the same welfare as a normal human.

However, for my point to be right, I think you just need to treat these kinds of functionalist views as in the running. You don't have to be confident that they're true.

I agree. However, I think the weight of models which are practically not sensitive to physical quantities could be astronomically low. Mistakes like the one I illustrated above about gravitational force happen when the weights of models are guessed independently of their consequences. I suspect the variance in weights should not be that different from the variance in the consequences. For example, for welfare ranges of a) 10^-100, b) 10^-10, and c) 1, I would guess weights not that different from 1 on a), 10^-10 on b), and 10^-100 on c).
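
As a minimal sketch of what I mean, here are the illustrative welfare ranges and weights above in Python; the numbers are just guesses for illustration, not outputs of any model.

```python
# Illustrative welfare ranges under 3 models, and weights guessed to vary
# roughly inversely with them (rather than being guessed independently).
welfare_ranges = [1e-100, 1e-10, 1]
weights = [1, 1e-10, 1e-100]

# Weight-averaged welfare range. With weights like these, the model implying
# a welfare range of 1 contributes almost nothing to the expectation.
expected_welfare_range = (
    sum(w * r for w, r in zip(weights, welfare_ranges)) / sum(weights)
)
print(expected_welfare_range)  # about 1e-20
```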
