Bio

Participation
4

I am a generalist quantitative researcher.

How others can help me

I am open to volunteering and paid work (I usually ask for 20 $/h). I welcome suggestions for posts. You can give me feedback here (anonymously or not).

How I can help others

I can help with career advice, prioritisation, and quantitative analyses.

Comments
3082

Topic contributions
41

Hi Matthew.

Our actions have lots of unpredictable effects. If you drive to the store, you will delay everyone behind you in traffic. This will change when they next have sex, thus completely changing the identity of their future child. A different sperm and egg will fuse. This new child will go on to take a staggeringly large number of actions, each of which will change the identity of still more people. For this reason, because of your decision to drive to the store, the world hundreds of years down the line will be completely different.

I agree small actions like driving to the store may have large actual consequences. However, I believe their expected consequences are very small. I think the probability of any given child being born is practically the same whether or not one drives to the store. One could tell a story where driving to the store leads to A being born instead of B. However, one could tell a story practically as convincing where driving to the store leads to B being born instead of A. So one should practically stick to the prior that driving to the store does not change the probability of A or B being born. Likewise, driving to the store could cause a given hurricane H, but it is almost as likely to prevent it. So the probability of hurricane H is practically the same whether or not one drives to the store.

@Derek Shiller, I would be curious to know your thoughts on the above.

Thanks, Spiarrow. I just set up a periodic reminder to check the post as a result of your comment.

Here is some more context about the exponent-based approach.

I agree, assuming they are conscious.

You may be interested in these posts:

This suggests that neuron count alone is too crude a metric (though perhaps useful when comparing organisms with very different brains).

I like to compare the sentience-adjusted welfare ranges (probability of sentience times the welfare range conditional on sentience) of organisms with neurons assuming they are proportional to "individual number of neurons"^"exponent". I consider exponents from 0 to 2 reasonable best guesses. An exponent of 0.188 explains the sentience-adjusted welfare ranges presented in Bob's book (which rely on much more than the individual number of neurons) very well. Below is a graph illustrating this.
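As a rough sketch of the proportionality above (the exponent of 0.188 and the 86 billion human neurons come from the surrounding discussion; the function name and the normalisation to humans are my own, for illustration only):

```python
def sentience_adjusted_welfare_range(neurons, exponent=0.188):
    """Hypothetical sentience-adjusted welfare range proportional to
    neurons**exponent, normalised so humans (86 billion neurons) get 1."""
    human_neurons = 86e9
    return (neurons / human_neurons) ** exponent

# With an exponent of 0.188, an adult nematode with 302 neurons gets a
# much smaller, but far from negligible, welfare range per individual
# (about 0.026 on this normalisation).
nematode = sentience_adjusted_welfare_range(302)
human = sentience_adjusted_welfare_range(86e9)
```

A lower exponent compresses the differences between organisms, and an exponent of 0 treats all sentient organisms as having the same welfare range.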

For comparisons involving organisms with and without neurons, I would assume sentience-adjusted welfare ranges proportional to "individual mass"^"exponent" or "metabolic rate"^"exponent". I do not think the specific proxy matters that much. In allometry, "the study of the relationship of body size to shape, anatomy, physiology and behaviour", "the relationship between the two measured quantities is often expressed as a power law equation (allometric equation)". If the sentience-adjusted welfare range is proportional to "proxy 1"^"exponent 1", and "proxy 1" is proportional to "proxy 2"^"exponent 2", the sentience-adjusted welfare range is proportional to "proxy 2"^("exponent 1"*"exponent 2"). So the results for "proxy 2" and exponent "exponent 1"*"exponent 2" are the same as those for "proxy 1" and "exponent 1".
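The composition of power laws above can be checked numerically. The exponents and constants below are arbitrary placeholders, not estimates from any source:

```python
# If W = k1 * p1**a (welfare range vs proxy 1) and p1 = k2 * p2**b
# (proxy 1 vs proxy 2), then W = (k1 * k2**a) * p2**(a * b),
# i.e. W is also a power law in proxy 2, with exponent a * b.
a, b = 0.188, 0.75  # illustrative exponents
k1, k2 = 2.0, 3.0   # arbitrary proportionality constants

for p2 in [1.0, 10.0, 100.0]:
    p1 = k2 * p2 ** b
    w_via_p1 = k1 * p1 ** a
    w_direct = (k1 * k2 ** a) * p2 ** (a * b)
    # The two routes to W agree up to floating-point error.
    assert abs(w_via_p1 - w_direct) < 1e-9 * w_direct
```

This is why the choice between individual mass and metabolic rate as the proxy only rescales the exponent, rather than changing the functional form.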

Hi David. I think there are at least two corollaries attributed to Asimov. Peter's Asimov's corollary is Asimov's corollary to Parkinson's law, whereas the post you linked to presents Asimov's corollary to Clarke's 1st law. I also liked this corollary:

When, however, the lay public rallies around an idea that is denounced by elderly but distinguished scientists and supports that idea with great fervor and emotion—the distinguished but elderly scientists are then, after all, probably right.

Hi Riccardo.

The idea that experiential intensity simply scales down with the number of neurons seems hard to accept: it implies that simpler organisms live something like a barely-there flicker of experience, which also places us humans at the apex of perceived intensity in the universe.

Very simple organisms could still matter a lot despite having much less intense experiences. I estimate farmed animals and soil invertebrates have 1.87 and 253 times as many neurons in total as humans. The graph below has more detail. Nematodes are the animals with the fewest neurons, with an adult Caenorhabditis elegans having 302 neurons, but I estimate soil nematodes have 169 times as many neurons in total as humans.

humans at the apex of perceived intensity in the universe

There are animals with more neurons than humans. Short-finned pilot whales and African elephants have 128 billion and 257 billion neurons, 1.49 (= 128/86) and 2.99 (= 257/86) times as many as humans.

I also think there's a distinction worth drawing between the "dimensionality" of an experience (how many qualitative states a mind can occupy) and its intensity. A simple mind might have very few "keys," but still hit each of them hard.

I agree.

Hi Aaron. I liked this post.

Math

Some days, I don't have any particular motivation. That's when I turn to expected utility.

Very funny.

Hi James. Thanks for the valuable post.

Hi Charles.

But even Farmkind’s more generous number is still disappointingly low. To many EAs I have spoken to, it seems to imply that going vegan is only “worth” a paltry $276—or alternatively, that donating at least $276 to pro-animal charities “buys” them the freedom from being vegan.

I estimate, based on results from Animal Charity Evaluators (ACE), that donating 2.34 $/person-year (= 138/59.0) to The Humane League (THL) offsets the effects on farmed animals of a random person in the United States (US), which is 0.848 % (= 2.34/276) of FarmKind's estimate.

we're dead OR there's infinite abundance

Your median date for this is in 2035?

Maybe would be open to "you transfer 1k to me now (2026), I give you interest-indexed 2k in 2035" or whatever odds make sense.

I made a similar bet in the past, but the one above is not worth it for me. Global stocks had annual real returns of 5 % from 1900 to 2022, and I expect faster growth from 2026 to 2035. For an annual real growth of 7.5 %, the bet would only be worth it if I won 1.92 (= (1 + 0.075)^(2035 - 2026)) times as many 2026-$ as those I initially invested. Assuming a 75 % chance of winning given the risk of you not paying me, I would need to win 2.56 (= 1.92/0.75) times as many 2026-$ as those I initially invested. Are you open to a bet like the one here, but resolving at the end of 2034, and involving a potential gain for me of 3 times as many 2026-$ as those I initially invested (for example, 1 k 2026-$)? If so, are you also open to disclosing and confirming your identity? I would want this to ensure you have a greater incentive to respect the bet.
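For reference, the break-even multiple above follows from compounding the assumed 7.5 % annual real return over the 9 years, and then adjusting for the assumed 75 % chance of actually being paid:

```python
annual_real_return = 0.075  # assumed annual real return, 2026 to 2035
years = 2035 - 2026  # 9 years

# Multiple needed just to match investing in global stocks instead.
opportunity_cost = (1 + annual_real_return) ** years  # about 1.92

# Adjusting for a 75 % chance the counterparty pays out.
p_paid = 0.75
required_multiple = opportunity_cost / p_paid  # about 2.56
```

The asked-for multiple of 3 then leaves a margin over the 2.56 break-even point.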
