Bio

I am looking for work, and welcome suggestions for posts.

How others can help me

I am looking for work. I welcome suggestions for posts. You can give me feedback here (anonymously or not). Feel free to share your thoughts on the value (or lack thereof) of my posts.

How I can help others

I can help with career advice, prioritisation, and quantitative analyses.

Comments
2585

Topic contributions
33

And critically, I appreciate the clarification that "decreasing uncertainty" is your priority - I didn't realize that from past posts, but I think your most recent one is clear on that.

Yes, I think I could have been clearer about it in the past. Now I am also more uncertain. I previously thought increasing agricultural land use was a pretty good heuristic for decreasing soil-animal-years, but it looks like it may easily increase these due to increasing soil-nematode-years.

When I look at my own uncertainties of this kind, it feels almost like lying to put a precise number on them (I'm not saying others should feel this way, just that it is how I feel). So that's the most basic reason (among the other sort of theoretic reasons out there) that I feel attached to imprecise probabilities.

Makes sense. However, I would simply assign roughly the same probability to values (of a variable of interest) I feel very similarly about. The distribution representing the different possible values will be wider if one is indifferent between more of them. Yet, I do not understand how one could accept imprecise probabilities. In my mind, a given value is always less, more, or as likely as another. I would not be able to distinguish between the mass of 2 objects with 1 and 1.001 kg by just having them in my hands, but this does not mean their masses are incomparable.
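A minimal sketch (mine, not from the comment) of the point above: being indifferent among many values can still be represented with a single precise distribution that assigns them roughly equal probability, so more indifference just shows up as a wider distribution, not as incomparability.

```python
# Candidate masses in kg I cannot tell apart by hand (illustrative grid).
values = [1.000 + i * 0.0001 for i in range(11)]  # 1.000 to 1.001 kg
prob = 1 / len(values)                            # same weight for each value

# A precise distribution still has a well-defined centre and spread.
mean = sum(prob * v for v in values)
variance = sum(prob * (v - mean) ** 2 for v in values)

print(mean)      # centre of the distribution
print(variance)  # indifference appears as spread, not incomparability
```

If I felt indifferent over a wider range of values, only `values` would grow and the variance with it; every pair of values would remain comparable in probability.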

Hi CB.

But right now, it seems less risky for me to donate to farmed animals, at least to welfare reforms with much less impact on wild animals, like cage-free campaigns. 

For individual welfare per animal-year proportional to "number of neurons"^0.5, I estimate that cage-free and broiler welfare corporate campaigns change the welfare of soil ants, termites, springtails, mites, and nematodes 1.15 k and 18.0 k times as much as they increase the welfare of chickens. I have little idea about whether the effects on soil animals are positive or negative. I am very uncertain about what increases or decreases soil-animal-years, and about whether soil animals have positive or negative lives. So I am also very uncertain about whether such campaigns increase or decrease welfare (in expectation). I do not even know whether electrically stunning shrimp increases or decreases welfare, and I see it as one of the interventions where effects on non-target beneficiaries are the least important.
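The scaling assumption above can be sketched as follows. The neuron counts here are placeholders for illustration only, not the figures behind the estimates in the comment.

```python
# Welfare weight per animal-year proportional to (number of neurons)**0.5,
# as assumed in the comment above. Neuron counts below are hypothetical.

def welfare_weight(neurons: float) -> float:
    """Relative welfare weight under the neurons**0.5 assumption."""
    return neurons ** 0.5

# Under this assumption, an animal with 100x fewer neurons
# gets a 10x (not 100x) lower welfare weight.
ratio = welfare_weight(1e8) / welfare_weight(1e6)
print(ratio)
```

The square-root exponent compresses differences in neuron counts, which is why huge numbers of small-brained soil animals can dominate the totals even at low per-individual weights.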

Thanks for the great post, Mal! I strongly upvoted it.

I argue that, when applied consistently across cause areas, none of these approaches suggest wild animal welfare is distinctively intractable compared to global health or AI safety.

Agreed. In addition, I do not think wild animal welfare is distinctively intractable compared to interventions focusing on non-wild animals. I am uncertain to the point that I do not know whether electrically stunning shrimp increases or decreases welfare in expectation, and I see it as one of the interventions where effects on non-target beneficiaries are the least important.

Another group of people attempting to include all moral patients in their analyses seem to basically reject cluelessness by trying to calculate (at least partially based on intuitions) the effects of interventions on as many questionably sentient moral patients as possible (for example, see this post). The idea is to come up with all the effects you can think of and assign precise probabilities to every possible outcome, even in the face of deep uncertainty. You can even assign some kind of modifier to capture all the “unknown unknowns.”

[...] As a result, your views become volatile: You might determine an AI policy is net positive today, then completely reverse that judgment months later, after minor updates. Although some may think that this outcome is an unfortunate but necessary aspect of the “right” decision theory, it is extremely hard to see how one might run a movement this way. Switching from endorsing bird-safe glass to not endorsing it on a monthly basis would lead to little impact and few supporters.

In cases where there is large uncertainty about whether an intervention increases or decreases welfare (in expectation), I believe it is very often better to support interventions decreasing that uncertainty. In the post of mine linked above, my top recommendation is decreasing the uncertainty about whether soil nematodes have positive or negative lives. I tried to be clearer about decreasing uncertainty being my priority here.

At the same time, I would not say constantly switching between 2 options which can easily increase or decrease welfare in expectation is robustly worse than just pursuing one of them. The constant switching would achieve no impact, but it is unclear whether this is better or worse than pursuing a single option if there is large uncertainty about whether it increases or decreases welfare.

Agreed, Noah. For 15 k shrimps helped per $, it would cost 9.60 k$ (= 144*10^6/(15*10^3)).
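A quick check of the arithmetic above (assuming, as the comment implies, that 144*10^6 is the total number of shrimps helped):

```python
# Cost of helping 144 million shrimps at 15 k shrimps helped per $.
shrimps_helped_per_dollar = 15e3
total_shrimps = 144e6

cost_dollars = total_shrimps / shrimps_helped_per_dollar
print(cost_dollars)  # 9600.0, i.e. 9.60 k$
```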

Thanks, Cyrill. For some reason, I only received an email notification about your comment now. I no longer think AW interventions are more cost-effective than GHD ones, or that SWP's HSI is more cost-effective than cage-free corporate campaigns. I am very uncertain about whether any of these interventions increase or decrease welfare in expectation due to very uncertain dominant effects on soil animals and microorganisms. Here is an illustration of why I believe electrically stunning farmed shrimps may decrease or increase welfare, even if I were certain it increased the welfare of shrimps conditional on these being sentient.

Thanks for sharing! Have you considered comparing the performance of random humans with LLMs?

Hi Matt. Since you mentioned "vaccines", you may be interested in the podcast Hard Drugs.

Hard Drugs is a show by Saloni Dattani and Jacob Trefethen about medical innovation: how to speed it up, how to scale it up, and how to make sure lifesaving tools reach the people who need them the most. It is brought to you by Works in Progress and Open Philanthropy. Listen on your favorite podcast app or subscribe to our YouTube channel.

Thanks for clarifying, Caroline! I wonder whether sharing the calculations with disclaimers to avoid misinterpretations would help enough for it to be worth sharing.

Great post, Abraham! One way to think about this is that what matters is maximising "cost-effectiveness of advocating for a funder to support an intervention" = ("cost-effectiveness of the intervention" - "cost-effectiveness of what the funder is funding")*"money moved as a fraction of the money spent advocating for the intervention" = (CE_intervention - CE_funder)*"fundraising multiplier". As CE_intervention increases, CE_funder tends to increase, and the "fundraising multiplier" tends to decrease. So it is unclear whether one should be advocating for the most cost-effective interventions.
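The trade-off above can be sketched with made-up numbers (the function name and figures are mine, purely illustrative):

```python
# Cost-effectiveness of advocating for a funder to switch to an intervention,
# per the formula in the comment above:
# (CE_intervention - CE_funder) * fundraising multiplier.

def ce_advocacy(ce_intervention: float, ce_funder: float, multiplier: float) -> float:
    """Value moved per $ spent advocating (units: value per $)."""
    return (ce_intervention - ce_funder) * multiplier

# Hypothetical numbers: a top intervention whose natural funders already
# fund something nearly as good, vs. a decent intervention with a large
# counterfactual gap and an easier fundraising pitch.
top = ce_advocacy(ce_intervention=10, ce_funder=8, multiplier=2)
decent = ce_advocacy(ce_intervention=6, ce_funder=2, multiplier=5)

print(top, decent)  # advocating for the less cost-effective intervention can win
```

Since CE_funder and the multiplier move against CE_intervention, the product can peak somewhere below the most cost-effective intervention.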

Thanks for clarifying.

Steam is not simply "liquid water but a bit warmer"; steam has very different properties altogether.

I agree pains of different intensities have different properties. My understanding is that the Welfare Footprint Institute (WFI) relies on this to some extent to define their 4 pain categories. However, I do not understand how that undermines my point. Water and water vapor have different properties, but we can still compare their temperature. Likewise, I think we can compare the intensity of different pain experiences even if they have different properties.

To extend this (very imperfect) analogy, imagine we lived in a world where steam killed people but (liquid) water didn't (because of properties specific to steam, like being inhalable or something). In this case, the claim "reducing sufficiently many units of lukewarm water would still be better than reducing a unit of steam" would miss the point by the lights of someone who cares about death.

I seem to agree. Assuming water had a potential to kill people of exactly 0, and steam had a potential to kill people above 0, no amount of water would have the potential to kill as many people as some amount of steam. However, I do not think this undermines my point. When I say that "averting sufficiently many hours of pain of a very low intensity would still be better than averting 1 h of pain of a very high intensity", the very low intensity still has to be higher than an intensity of exactly 0.
